Sample records for search method called

  1. Environmental Visualization and Horizontal Fusion

    DTIC Science & Technology

    2005-10-01

the section on EVIS Rules. Federated Search – Discovering Content Another method of discovering services and their content has been implemented...in HF through a next-generation knowledge discovery framework called Federated Search. A virtual information space, called Collateral Space, was...environmental mission effects products, is presented later in the paper. Federated Search allows users to search through Collateral Space data that is

  2. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    PubMed

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA) database. Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval {CI} 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and yielded more unique hits than searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks, being more time-consuming and less user-friendly, it generated more unique hits, retrieved on average fewer references, and returned fewer irrelevant results than the NHS Search Engine.

  3. Efficient multifeature index structures for music data retrieval

    NASA Astrophysics Data System (ADS)

    Lee, Wegin; Chen, Arbee L. P.

    1999-12-01

In this paper, we propose four index structures for music data retrieval. Based on suffix trees, we first develop two index structures called the combined suffix tree and the independent suffix trees. These structures still show shortcomings for some search functions, so we develop a third index, called the Twin Suffix Trees, to overcome these problems. However, the Twin Suffix Trees lack scalability when the amount of music data becomes large. We therefore propose a fourth index, called the Grid-Twin Suffix Trees, to provide scalability and flexibility for large amounts of music data. Each index supports different search functions, such as exact search and approximate search, on different music features, such as melody, rhythm, or both. We compare the performance of the different search functions applied to each index structure in a series of experiments.
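The exact-search case mentioned above can be illustrated with a minimal sketch. The paper's combined/Twin/Grid-Twin suffix-tree indices are not reproduced here; instead this uses a naive suffix array over a hypothetical string encoding of a melody (the encoding and both functions are illustrative assumptions):

```python
import bisect

def build_suffix_array(s):
    """Sorted list of (suffix, start_index) pairs for string s.
    O(n^2 log n) construction -- fine for a small illustration."""
    return sorted((s[i:], i) for i in range(len(s)))

def exact_search(suffix_array, pattern):
    """Return sorted start positions where pattern occurs, via binary search
    over the sorted suffixes."""
    lo = bisect.bisect_left(suffix_array, (pattern,))
    hits = []
    for suffix, start in suffix_array[lo:]:
        if not suffix.startswith(pattern):
            break
        hits.append(start)
    return sorted(hits)

# Melody encoded as a string of pitch symbols (hypothetical encoding).
melody = "CDEFGCDEG"
sa = build_suffix_array(melody)
print(exact_search(sa, "CDE"))  # -> [0, 5]
```

A suffix tree answers the same query in time proportional to the pattern length; the suffix array trades that for far simpler construction.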

  4. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
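As a rough illustration of the simulated-annealing part only (not NASA's constraint-based framework), the sketch below accepts any improving repair and accepts a worsening one with probability exp(-delta/T), which is how the technique escapes local minima; the `cost` and `neighbor` functions are toy assumptions:

```python
import math
import random

def anneal(schedule, cost, neighbor, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing repair loop: always accept improvements,
    accept a worsening move with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    current, temp = schedule, t0
    best = current
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if cost(current) < cost(best):
                best = current
        temp *= cooling  # geometric cooling schedule
    return best

# Toy "constraint violation" cost: number of adjacent duplicate tasks.
def cost(s):
    return sum(1 for a, b in zip(s, s[1:]) if a == b)

def neighbor(s, rng):
    i, j = rng.randrange(len(s)), rng.randrange(len(s))
    s = list(s)
    s[i], s[j] = s[j], s[i]  # repair move: swap two task slots
    return s

result = anneal(list("AABBCC"), cost, neighbor)
print(cost(result))
```

Because the best-so-far schedule is retained, the loop can be stopped at any time and still return the best repair found, which is the "anytime behavior" the abstract refers to.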

  5. Dynamic pattern matcher using incomplete data

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G. (Inventor); Wang, Lui (Inventor)

    1993-01-01

    This invention relates generally to pattern matching systems, and more particularly to a method for dynamically adapting the system to enhance the effectiveness of a pattern match. Apparatus and methods for calculating the similarity between patterns are known. There is considerable interest, however, in the storage and retrieval of data, particularly, when the search is called or initiated by incomplete information. For many search algorithms, a query initiating a data search requires exact information, and the data file is searched for an exact match. Inability to find an exact match thus results in a failure of the system or method.

  6. Bomb Threats and Bomb Search Techniques.

    ERIC Educational Resources Information Center

    Department of the Treasury, Washington, DC.

    This pamphlet explains how to be prepared and plan for bomb threats and describes procedures to follow once a call has been received. The content covers (1) preparation for bomb threats, (2) evacuation procedures, (3) room search methods, (4) procedures to follow once a bomb has been located, and (5) typical problems that search teams will…

  7. Comparative homology agreement search: An effective combination of homology-search methods

    PubMed Central

    Alam, Intikhab; Dress, Andreas; Rehmsmeier, Marc; Fuellen, Georg

    2004-01-01

    Many methods have been developed to search for homologous members of a protein family in databases, and the reliability of results and conclusions may be compromised if only one method is used, neglecting the others. Here we introduce a general scheme for combining such methods. Based on this scheme, we implemented a tool called comparative homology agreement search (chase) that integrates different search strategies to obtain a combined “E value.” Our results show that a consensus method integrating distinct strategies easily outperforms any of its component algorithms. More specifically, an evaluation based on the Structural Classification of Proteins database reveals that, on average, a coverage of 47% can be obtained in searches for distantly related homologues (i.e., members of the same superfamily but not the same family, which is a very difficult task), accepting only 10 false positives, whereas the individual methods obtain a coverage of 28–38%. PMID:15367730
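The idea of agreeing across several homology-search methods can be sketched as follows. This is not chase's actual combined "E value"; it simply sums log E-values across methods, with a penalty for methods that missed a candidate (the method names, hit data, and penalty value are all hypothetical):

```python
import math

def combined_score(hits_by_method, penalty_e=10.0):
    """Combine per-method E-values into one consensus score per candidate.
    Methods that missed a candidate contribute a penalty E-value.
    Lower score = better; returns candidates sorted best-first.
    (Illustrative scheme only; the paper's combined E value differs.)"""
    candidates = set()
    for hits in hits_by_method.values():
        candidates |= hits.keys()
    scores = {}
    for cand in candidates:
        scores[cand] = sum(math.log(hits.get(cand, penalty_e))
                           for hits in hits_by_method.values())
    return sorted(scores, key=scores.get)

# Hypothetical E-values from two search methods.
hits = {
    "blast": {"protA": 1e-8, "protB": 0.5},
    "hmmer": {"protA": 1e-6, "protC": 0.01},
}
print(combined_score(hits))  # -> ['protA', 'protC', 'protB']
```

A hit confirmed independently by several methods ends up ranked above one found strongly by a single method, which is the intuition behind the consensus outperforming its components.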

  8. The After-Death Call to Family Members: Academic Perspectives

    ERIC Educational Resources Information Center

    LoboPrabhu, Sheila; Molinari, Victor; Pate, Jennifer; Lomax, James

    2008-01-01

    Objective: The authors discuss clinical and teaching aspects of a telephone call by the treating clinician to family members after a patient dies. Methods: A MEDLINE search was conducted for references to an after-death call made by the treating clinician to family members. A review of this literature is summarized. Results: A clinical application…

  9. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    NASA Astrophysics Data System (ADS)

    Nakamura, Katsuhiko; Hoshina, Akemi

This paper discusses recent improvements and extensions of the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. Synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the generated CFGs are generally larger than those found by the minimum set search, and the serial search finds no appropriate grammar for some CFLs. The paper presents experimental results on incremental learning of several fundamental CFGs and compares the rule generation methods and search strategies.

  10. Using scenario-based training to promote information literacy among on-call consultant pediatricians

    PubMed Central

    Pettersson, Jonas; Bjorkander, Emil; Bark, Sirpa; Holmgren, Daniel; Wekell, Per

    2017-01-01

Background: Traditionally, teaching hospital staff to search for medical information relies heavily on educator-defined search methods. In contrast, the authors describe their experiences using real-time scenarios to teach on-call consultant pediatricians information literacy skills as part of a two-year continuing professional development program. Case Presentation: Two information-searching workshops were held at Sahlgrenska University Hospital in Gothenburg, Sweden. During the workshops, pediatricians were presented with medical scenarios that were closely related to their clinical practice. Participants were initially encouraged to solve the problems using their own preferred search methods, followed by group discussions led by clinical educators and a medical librarian in which search problems were identified and overcome. The workshops were evaluated using questionnaires to assess participant satisfaction and the extent to which participants intended to implement changes in their clinical practice and reported actual change. Conclusions: A scenario-based approach to teaching clinicians how to search for medical information is an attractive alternative to traditional lectures. The relevance of such an approach was supported by a high level of participant engagement during the workshops and high scores for participant satisfaction, intended changes to clinical practice, and reported benefits in actual clinical practice. PMID:28670215

  11. Using scenario-based training to promote information literacy among on-call consultant pediatricians.

    PubMed

    Pettersson, Jonas; Bjorkander, Emil; Bark, Sirpa; Holmgren, Daniel; Wekell, Per

    2017-07-01

Traditionally, teaching hospital staff to search for medical information relies heavily on educator-defined search methods. In contrast, the authors describe their experiences using real-time scenarios to teach on-call consultant pediatricians information literacy skills as part of a two-year continuing professional development program. Two information-searching workshops were held at Sahlgrenska University Hospital in Gothenburg, Sweden. During the workshops, pediatricians were presented with medical scenarios that were closely related to their clinical practice. Participants were initially encouraged to solve the problems using their own preferred search methods, followed by group discussions led by clinical educators and a medical librarian in which search problems were identified and overcome. The workshops were evaluated using questionnaires to assess participant satisfaction and the extent to which participants intended to implement changes in their clinical practice and reported actual change. A scenario-based approach to teaching clinicians how to search for medical information is an attractive alternative to traditional lectures. The relevance of such an approach was supported by a high level of participant engagement during the workshops and high scores for participant satisfaction, intended changes to clinical practice, and reported benefits in actual clinical practice.

  12. Epsilon-Q: An Automated Analyzer Interface for Mass Spectral Library Search and Label-Free Protein Quantification.

    PubMed

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki

    2017-12-01

Mass spectrometry (MS) is a widely used proteome analysis tool in biomedical science. In MS-based bottom-up proteomic approaches to protein identification, sequence database (DB) searching has been used routinely because of its simplicity and convenience. However, searching a sequence DB with multiple variable-modification options can increase processing time and false-positive errors in large and complicated MS data sets. Spectral library searching is an alternative that avoids the limitations of sequence DB searching and detects more peptides with high sensitivity. Unfortunately, this technique has lower proteome coverage, limiting the detection of novel and complete peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually combines multiple reference and simulated spectral libraries to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery-rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins, supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec Search method, the resulting Epsilon-Q system shows a synergistic effect, outperforming other sequence DB search engines in identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.
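The spectrum-to-spectrum matching that spectral library searching relies on is commonly scored with a normalized dot product over binned peaks. A minimal sketch (the bin width and example peaks are assumptions, and this is not Epsilon-Q's or SpectraST's exact scoring function):

```python
import math
from collections import defaultdict

def bin_spectrum(peaks, bin_width=1.0):
    """Bin (m/z, intensity) peaks and L2-normalize the intensities."""
    bins = defaultdict(float)
    for mz, intensity in peaks:
        bins[int(mz / bin_width)] += intensity
    norm = math.sqrt(sum(v * v for v in bins.values()))
    return {k: v / norm for k, v in bins.items()}

def dot_score(query, library):
    """Normalized dot product between two binned spectra (1.0 = identical)."""
    q, lib = bin_spectrum(query), bin_spectrum(library)
    return sum(q[k] * lib.get(k, 0.0) for k in q)

# Hypothetical (m/z, intensity) peak list.
spec = [(100.1, 5.0), (250.4, 10.0), (300.2, 2.0)]
print(round(dot_score(spec, spec), 3))  # identical spectra -> 1.0
```

In a library search, the query spectrum is scored against each candidate library spectrum this way and the top-scoring match is reported.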

  13. Fast Open-World Person Re-Identification.

    PubMed

    Zhu, Xiatian; Wu, Botong; Huang, Dongcheng; Zheng, Wei-Shi

    2018-05-01

Existing person re-identification (re-id) methods typically assume that: 1) any probe person is guaranteed to appear in the gallery target population during deployment (i.e., closed-world) and 2) the probe set contains only a limited number of people (i.e., small search scale). Both assumptions are artificial and are breached in real-world applications, since the probe population in target-people search can be extremely vast in practice due to the ambiguity of the probe search space boundary. It is therefore unrealistic to assume that every probe person is a target person, and large-scale search over person images is inherently required. In this paper, we introduce a new person re-id search setting, called large-scale open-world (LSOW) re-id, characterized by a huge set of probe images and an open person population, and thus closer to practical deployments. Under LSOW, the under-studied problem of person re-id efficiency is essential in addition to the commonly studied re-id accuracy. We therefore develop a novel fast person re-id method, called Cross-view Identity Correlation and vErification (X-ICE) hashing, for joint learning of cross-view identity representation binarisation and discrimination in a unified manner. Extensive comparative experiments on three large-scale benchmarks validate the superiority and advantages of the proposed X-ICE method over a wide range of state-of-the-art hashing models, person re-id methods, and their combinations.

  14. Spectrum-based method to generate good decoy libraries for spectral library searching in peptide identifications.

    PubMed

    Cheng, Chia-Ying; Tsai, Chia-Feng; Chen, Yu-Ju; Sung, Ting-Yi; Hsu, Wen-Lian

    2013-05-03

As spectral library searching has received increasing attention for peptide identification, constructing good decoy spectra from the target spectra is key to correctly estimating the false discovery rate when searching against a concatenated target-decoy spectral library. Several methods have been proposed to construct decoy spectral libraries. Most of them construct decoy peptide sequences and then generate theoretical spectra accordingly. In this paper, we propose a method, called precursor-swap, which constructs decoy spectral libraries directly at the "spectrum level," without generating decoy peptide sequences, by swapping the precursors of two spectra selected according to a very simple rule. Our spectrum-based method requires no additional effort to deal with ion types (e.g., a, b or c ions), fragmentation mechanisms (e.g., CID or ETD), or unannotated peaks, yet preserves many spectral properties. The precursor-swap method is evaluated on different spectral libraries, and the resulting decoy ratios show that it is comparable to other methods. Notably, it is efficient in time and memory usage when constructing decoy libraries. A software tool called Precursor-Swap-Decoy-Generation (PSDG) is publicly available for download at http://ms.iis.sinica.edu.tw/PSDG/.
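The core idea of swapping precursors between two target spectra can be sketched as follows; the adjacent-pairing rule used here is a naive stand-in for the paper's selection rule, and the record layout is an assumption:

```python
def precursor_swap(spectra):
    """Build decoy spectra by exchanging precursor m/z between pairs of
    target spectra. Fragment peaks are kept intact, so spectral properties
    (peak counts, intensity distribution) survive in the decoys.
    (Naive adjacent pairing; the paper selects pairs by its own rule.)"""
    decoys = []
    for a, b in zip(spectra[::2], spectra[1::2]):
        decoys.append({"precursor_mz": b["precursor_mz"], "peaks": a["peaks"]})
        decoys.append({"precursor_mz": a["precursor_mz"], "peaks": b["peaks"]})
    return decoys

# Hypothetical target spectra.
targets = [
    {"precursor_mz": 500.3, "peaks": [(200.1, 3.0), (350.2, 7.0)]},
    {"precursor_mz": 620.8, "peaks": [(180.0, 4.0), (410.5, 9.0)]},
]
decoys = precursor_swap(targets)
print(decoys[0]["precursor_mz"])  # -> 620.8
```

Because a decoy keeps real fragment peaks but a mismatched precursor, it is retrieved by precursor-windowed searches yet should not score as a correct match, which is what an FDR estimate needs.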

  15. Searching for evidence or approval? A commentary on database search in systematic reviews and alternative information retrieval methodologies.

    PubMed

    Delaney, Aogán; Tamás, Peter A

    2018-03-01

    Despite recognition that database search alone is inadequate even within the health sciences, it appears that reviewers in fields that have adopted systematic review are choosing to rely primarily, or only, on database search for information retrieval. This commentary reminds readers of factors that call into question the appropriateness of default reliance on database searches particularly as systematic review is adapted for use in new and lower consensus fields. It then discusses alternative methods for information retrieval that require development, formalisation, and evaluation. Our goals are to encourage reviewers to reflect critically and transparently on their choice of information retrieval methods and to encourage investment in research on alternatives. Copyright © 2017 John Wiley & Sons, Ltd.

  16. PISA: Federated Search in P2P Networks with Uncooperative Peers

    NASA Astrophysics Data System (ADS)

    Ren, Zujie; Shou, Lidan; Chen, Gang; Chen, Chun; Bei, Yijun

    Recently, federated search in P2P networks has received much attention. Most of the previous work assumed a cooperative environment where each peer can actively participate in information publishing and distributed document indexing. However, little work has addressed the problem of incorporating uncooperative peers, which do not publish their own corpus statistics, into a network. This paper presents a P2P-based federated search framework called PISA which incorporates uncooperative peers as well as the normal ones. In order to address the indexing needs for uncooperative peers, we propose a novel heuristic query-based sampling approach which can obtain high-quality resource descriptions from uncooperative peers at relatively low communication cost. We also propose an effective method called RISE to merge the results returned by uncooperative peers. Our experimental results indicate that PISA can provide quality search results, while utilizing the uncooperative peers at a low cost.

  17. PIRIA: a general tool for indexing, search, and retrieval of multimedia content

    NASA Astrophysics Data System (ADS)

    Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.

    2004-05-01

The Internet is a continuously expanding source of multimedia content and information, and many products are in development to search, retrieve, and understand multimedia content. But most current image search/retrieval engines rely on an image database manually pre-indexed with keywords; computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. The indexed images are compared by several different classifiers, not only keywords but also form, color, and texture, taking into account geometric transformations and invariances such as rotation, symmetry, and mirroring. Form: edges extracted by an efficient segmentation algorithm. Color: histogram, semantic color segmentation, and spatial color relationships. Texture: texture wavelets and local edge patterns. If required, Piria can also fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC) or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications will be explored and discussed, as well as current and future development.

  18. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Efficient Local Search

    DTIC Science & Technology

    2012-02-01

    use the ERDC software implementation of the secant LM method that accommodates the PEST model independent interface to calibrate a GSSHA...how the method works. We will also demonstrate how our LM/SLM implementation compares with its counterparts as implemented in the popular PEST ...function values and total model calls for local search to converge) associated with Examples 1 and 3 using the PEST LM/SLM implementations

  19. Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.

    PubMed

    Dharan, Smitha; Nair, Achuthsankar S

    2009-01-30

Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix, and they can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, and it has become one of the most popular measures for searching for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), namely its construction and local search phases, and propose a new variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high-quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as against the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms both the basic GRASP algorithm and the Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.
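The Cheng and Church mean squared residue mentioned above has a simple closed form: the mean over bicluster cells of (a_ij - rowmean_i - colmean_j + overall_mean)^2, which is zero for a perfectly additive pattern. A minimal sketch (the example matrix is an illustrative assumption):

```python
def mean_squared_residue(matrix):
    """Cheng-Church mean squared residue H(I,J) of a bicluster given as a
    list of rows. Lower H = more coherent bicluster; H = 0 for a matrix
    of the form a_ij = r_i + c_j (a perfectly additive pattern)."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    row_means = [sum(row) / n_cols for row in matrix]
    col_means = [sum(matrix[i][j] for i in range(n_rows)) / n_rows
                 for j in range(n_cols)]
    overall = sum(row_means) / n_rows
    total = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            residue = matrix[i][j] - row_means[i] - col_means[j] + overall
            total += residue * residue
    return total / (n_rows * n_cols)

# Perfectly additive bicluster (rows shifted copies of each other): H ~ 0.
print(mean_squared_residue([[1, 2, 3], [2, 3, 4], [5, 6, 7]]))
```

A GRASP-style search grows or shrinks candidate biclusters while keeping this score below a threshold.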

  20. Active Solution Space and Search on Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo

In this paper we propose a new search method based on a genetic algorithm for the job-shop scheduling problem (JSP). The coding method represents job numbers to decide the priority with which jobs are arranged on a Gantt chart (called the ordinal representation with a priority); an active schedule is then created by applying left shifts. We first define an active solution as a solution that yields an active schedule without using left shifts, and call the set of such solutions the active solution space. We then propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which creates active solutions while solutions are evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.

  1. Sachem: a chemical cartridge for high-performance substructure search.

    PubMed

    Kratochvíl, Miroslav; Vondrášek, Jiří; Galgonek, Jakub

    2018-05-23

Structure search is one of the valuable capabilities of small-molecule databases. Fingerprint-based screening methods are usually employed to enhance the search performance by reducing the number of calls to the verification procedure. In substructure search, fingerprints are designed to capture important structural aspects of the molecule to aid the decision about whether the molecule contains a given substructure. Currently available cartridges typically provide acceptable search performance for processing user queries, but do not scale satisfactorily with dataset size. We present Sachem, a new open-source chemical cartridge that implements two substructure search methods: the first is a performance-oriented reimplementation of substructure indexing based on the OrChem fingerprint, and the second is a novel method that employs newly designed fingerprints stored in inverted indices. We assessed the performance of both methods on small, medium, and large datasets containing 1, 10, and 94 million compounds, respectively. Comparison of Sachem with other freely available cartridges revealed improvements in overall performance, scaling potential, and screen-out efficiency. The Sachem cartridge allows efficient substructure searches in databases of all sizes. The sublinear performance scaling of the second method and the ability to efficiently query large amounts of pre-extracted information may together open the door to new applications for substructure searches.
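The screening step described above (fingerprints reducing calls to the verification procedure) can be sketched with bitset containment: a molecule can only contain the query substructure if every query fingerprint bit is also set in the molecule's fingerprint. The feature keys and hash choice below are illustrative assumptions, not Sachem's or OrChem's actual fingerprints:

```python
from zlib import crc32

def fingerprint(features, n_bits=64):
    """Hash a molecule's feature set (e.g. substructure keys) into an
    n-bit fingerprint stored as a Python int used as a bitset."""
    fp = 0
    for f in features:
        fp |= 1 << (crc32(f.encode()) % n_bits)
    return fp

def may_contain(mol_fp, query_fp):
    """Screening test: every bit set in the query must also be set in the
    molecule. A pass means 'maybe' (full verification is still needed,
    because of hash collisions); a fail is a definite reject."""
    return query_fp & mol_fp == query_fp

# Hypothetical feature keys standing in for substructure fragments.
benzene = fingerprint({"c:c", "ring6", "aromatic"})
phenol = fingerprint({"c:c", "ring6", "aromatic", "c-O", "O-H"})
print(may_contain(phenol, benzene))  # benzene features are a subset -> True
```

The point of the screen is that this bit test is vastly cheaper than subgraph-isomorphism verification, so a good fingerprint design maximizes the fraction of non-matches it rejects (the "screen-out efficiency" the abstract measures).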

  2. Effectiveness of a mass media campaign in promoting HIV testing information seeking among African American women.

    PubMed

    Davis, Kevin C; Uhrig, Jennifer; Rupert, Douglas; Fraze, Jami; Goetz, Joshua; Slater, Michael

    2011-10-01

    "Take Charge. Take the Test." (TCTT), a media campaign promoting HIV testing among African American women, was piloted in Cleveland and Philadelphia from October 2006 to October 2007. This study assesses TCTT's effectiveness in promoting HIV testing information seeking among target audiences in each pilot city. The authors analyzed data on telephone hotlines promoted by the campaign and the www.hivtest.org Web site to examine trends in hotline calls and testing location searches before, during, and after the campaign. Cleveland hotline data were available from October 1, 2005, through February 28, 2008, for a total of 29 months (N = 126 weeks). Philadelphia hotline data were available from May 1, 2006, through February 28, 2008, for a total of 22 months (N = 96 weeks). The authors assessed the relation between market-level measures of the campaign's advertising activities and trends in hotline call volume and testing location searches. They found a significant relation between measures of TCTT advertising and hotline calls. Specifically, they found that increases in advertising gross ratings points were associated with increases in call volume, controlling for caller demographics and geographic location. The campaign had similar effects on HIV testing location searches. Overall, it appears the campaign generated significant increases in HIV information seeking. Results are consistent with other studies that have evaluated the effects of media campaigns on similar forms of information seeking. This study illustrates useful methods for evaluating campaign effects on information seeking with data on media implementation, hotline calls, and zip code-based searches for testing locations.

  3. A comparison of survey methods for documenting presence of Myotis leibii (Eastern Small-Footed Bats) at roosting areas in Western Virginia

    USGS Publications Warehouse

    Huth, John K.; Silvis, Alexander; Moosman, Paul R.; Ford, W. Mark; Sweeten, Sara E.

    2015-01-01

Many aspects of foraging and roosting habitat of Myotis leibii (Eastern Small-Footed Bat), an emergent rock roosting-obligate, are poorly described. Previous comparisons of effectiveness of acoustic sampling and mist-net captures have not included the Eastern Small-Footed Bat. Habitat requirements of this species differ from congeners in the region, and it is unclear whether survey protocols developed for other species are applicable. Using data from three overlapping studies at two sampling sites in western Virginia’s central Appalachian Mountains, detection probabilities were examined for three survey methods (acoustic surveys with automated identification of calls, visual searches of rock crevices, and mist-netting) for use in the development of “best practices” for future surveys and monitoring. Observer effects were investigated using an expanded version of the visual search data. Results suggested that acoustic surveys with automated call identification are not effective for documenting presence of Eastern Small-Footed Bats on talus slopes (basal detection rate of 0%) even when the species is known to be present. The broadband, high-frequency echolocation calls emitted by the Eastern Small-Footed Bat may be prone to attenuation, and this factor, along with signal reflection, lower echolocation rates, and possible misidentification as other bat species over talus slopes, may have contributed to poor acoustic survey success. Visual searches and mist-netting of emergent rock had basal detection probabilities of 91% and 75%, respectively. Success of visual searches varied among observers, but detection probability improved with practice. Additionally, visual searches were considerably more economical than mist-netting.

  4. Cooperative quantum-behaved particle swarm optimization with dynamic varying search areas and Lévy flight disturbance.

    PubMed

    Li, Desheng

    2014-01-01

This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), limits the range of particle activity to a reduced area. The second, used to escape local optima, generates stochastic disturbances in particle movement by means of Lévy flights. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other PSO variants on both benchmark test functions and a combinatorial optimization problem, the job-shop scheduling problem.
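Lévy-flight disturbances are commonly generated with Mantegna's algorithm; a sketch of a one-dimensional step is below (the paper's exact disturbance scheme may differ, and the stability index beta=1.5 is a conventional choice, not taken from the abstract):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One-dimensional Levy-flight step via Mantegna's algorithm:
    step = u / |v|**(1/beta) with u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng=rng) for _ in range(1000)]
# The heavy tail produces mostly small moves plus occasional long jumps,
# which is what lets a stagnated particle escape a local optimum.
print(sum(1 for s in steps if abs(s) > 5))
```

In a PSO variant, such a step would be added to a particle's position update as a random perturbation when stagnation is detected.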

  5. Local search heuristic for the discrete leader-follower problem with multiple follower objectives

    NASA Astrophysics Data System (ADS)

    Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand

    2016-10-01

We study a discrete bilevel problem, also called the leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level can include variables of both levels. For such an ill-posed problem we define feasible and optimal solutions for the pessimistic case. The central contribution of this work is a two-stage method for obtaining a feasible solution in the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints; the target of the second stage is a pessimistically feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed integer programs. The method is integrated into a local search based heuristic designed to find near-optimal leader solutions.

  6. Optimum tuned mass damper design using harmony search with comparison of classical methods

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail; Sayin, Baris

    2017-07-01

As is well known, tuned mass dampers (TMDs) are added to mechanical systems in order to obtain good vibration damping. The main aim is to reduce the maximum amplitude at the resonance state. In this study, a metaheuristic algorithm called harmony search is employed for the optimum design of TMDs. As the optimization objective, the transfer function of the acceleration of the system with respect to ground acceleration was minimized. Numerical trials were conducted for four single-degree-of-freedom systems and the results were compared with classical methods. In conclusion, the proposed method is feasible and more effective than the other documented methods.
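As a rough illustration of the optimizer itself (not of the TMD objective, which requires the structural transfer function), here is a minimal harmony search with conventional parameter names; all parameter values are illustrative, not the paper's settings:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, rng=random):
    """Minimal harmony search minimizer (illustrative parameters).

    hms: harmony memory size; hmcr: memory-considering rate;
    par: pitch-adjusting rate; bw: relative bandwidth of pitch adjustment.
    """
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = rng.choice(memory)[d]          # memory consideration
                if rng.random() < par:             # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:
                x = rng.uniform(lo, hi)            # random consideration
            new.append(min(hi, max(lo, x)))
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                      # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

In the paper's setting, `objective` would evaluate the peak of the acceleration transfer function for a candidate set of TMD parameters.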

  7. PROSPECT improves cis-acting regulatory element prediction by integrating expression profile data with consensus pattern searches

    PubMed Central

    Fujibuchi, Wataru; Anderson, John S. J.; Landsman, David

    2001-01-01

    Consensus pattern and matrix-based searches designed to predict cis-acting transcriptional regulatory sequences have historically been subject to large numbers of false positives. We sought to decrease false positives by incorporating expression profile data into a consensus pattern-based search method. We have systematically analyzed the expression phenotypes of over 6000 yeast genes, across 121 expression profile experiments, and correlated them with the distribution of 14 known regulatory elements over sequences upstream of the genes. Our method is based on a metric we term probabilistic element assessment (PEA), which is a ranking of potential sites based on sequence similarity in the upstream regions of genes with similar expression phenotypes. For eight of the 14 known elements that we examined, our method had a much higher selectivity than a naïve consensus pattern search. Based on our analysis, we have developed a web-based tool called PROSPECT, which allows consensus pattern-based searching of gene clusters obtained from microarray data. PMID:11574681

  8. Tracking fin whales in the northeast Pacific Ocean with a seafloor seismic network.

    PubMed

    Wilcock, William S D

    2012-10-01

    Ocean bottom seismometer (OBS) networks represent a tool of opportunity to study fin and blue whales. A small OBS network on the Juan de Fuca Ridge in the northeast Pacific Ocean in ~2.3 km of water recorded an extensive data set of 20-Hz fin whale calls. An automated method has been developed to identify arrival times based on instantaneous frequency and amplitude and to locate calls using a grid search even in the presence of a few bad arrival times. When only one whale is calling near the network, tracks can generally be obtained up to distances of ~15 km from the network. When the calls from multiple whales overlap, user supervision is required to identify tracks. The absolute and relative amplitudes of arrivals and their three-component particle motions provide additional constraints on call location but are not useful for extending the distance to which calls can be located. The double-difference method inverts for changes in relative call locations using differences in residuals for pairs of nearby calls recorded on a common station. The method significantly reduces the unsystematic component of the location error, especially when inconsistencies in arrival time observations are minimized by cross-correlation.
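The grid-search location step can be sketched as follows. The sound speed, the demeaning of arrival times to remove the unknown call origin time, and the least-squares misfit are illustrative assumptions, not the paper's exact formulation:

```python
import math

SPEED = 1500.0  # assumed nominal sound speed in water, m/s

def locate(stations, arrivals, grid):
    """Grid-search call location from arrival times at several stations.

    For each candidate point, predicted travel times are compared with the
    observed arrivals after demeaning both series (which removes the unknown
    origin time); the point with the smallest residual sum of squares wins.
    """
    best, best_err = None, float("inf")
    for pt in grid:
        pred = [math.dist(pt, s) / SPEED for s in stations]
        pm = sum(pred) / len(pred)
        am = sum(arrivals) / len(arrivals)
        err = sum(((a - am) - (p - pm)) ** 2 for a, p in zip(arrivals, pred))
        if err < best_err:
            best, best_err = pt, err
    return best
```

Robustness to a few bad picks, as described in the abstract, would additionally require outlier-resistant misfits or pick rejection, which this sketch omits.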

  9. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products.

    PubMed

    Miake-Lye, Isomi M; Hempel, Susanne; Shanman, Roberta; Shekelle, Paul G

    2016-02-10

    The need for systematic methods for reviewing evidence is continuously increasing. Evidence mapping is one emerging method. There are no authoritative recommendations for what constitutes an evidence map or what methods should be used, and anecdotal evidence suggests heterogeneity in both. Our objectives are to identify published evidence maps and to compare and contrast the presented definitions of evidence mapping, the domains used to classify data in evidence maps, and the form the evidence map takes. We conducted a systematic review of publications that presented results with a process termed "evidence mapping" or included a figure called an "evidence map." We identified publications from searches of ten databases through 8/21/2015, reference mining, and consulting topic experts. We abstracted the research question, the unit of analysis, the search methods and search period covered, and the country of origin. Data were narratively synthesized. Thirty-nine publications met inclusion criteria. Published evidence maps varied in their definition and the form of the evidence map. Of the 31 definitions provided, 67 % described the purpose as identification of gaps and 58 % referenced a stakeholder engagement process or user-friendly product. All evidence maps explicitly used a systematic approach to evidence synthesis. Twenty-six publications referred to a figure or table explicitly called an "evidence map," eight referred to an online database as the evidence map, and five stated they used a mapping methodology but did not present a visual depiction of the evidence. The principal conclusion of our evaluation of studies that call themselves "evidence maps" is that the implied definition of what constitutes an evidence map is a systematic search of a broad field to identify gaps in knowledge and/or future research needs that presents results in a user-friendly format, often a visual figure or graph, or a searchable database. 
Foundational work is needed to better standardize the methods and products of an evidence map so that researchers and policymakers will know what to expect of this new type of evidence review. Although an a priori protocol was developed, no registration was completed; this review did not fit the PROSPERO format.

  10. Cooperative Quantum-Behaved Particle Swarm Optimization with Dynamic Varying Search Areas and Lévy Flight Disturbance

    PubMed Central

    Li, Desheng

    2014-01-01

This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, which uses two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), restricts the range of particles' activity to a reduced area. The second applies Lévy flights to generate stochastic disturbances in the movement of particles, helping them escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted comparing the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than the other PSO variants on both benchmark test functions and a combinatorial optimization task, the job-shop scheduling problem. PMID:24851085

  11. Biclustering of gene expression data using reactive greedy randomized adaptive search procedure

    PubMed Central

    Dharan, Smitha; Nair, Achuthsankar S

    2009-01-01

Background Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix, and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, and it has become one of the most popular measures for searching for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), namely its construction and local search phases, and propose a new method, a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP), to detect significant biclusters from large microarray datasets. The method has two major steps. First, high-quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. Results We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms both the basic GRASP algorithm and the Cheng and Church approach. Conclusion The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts. PMID:19208127
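The Cheng and Church mean squared residue score that the search minimizes is easy to transcribe directly; only the index-set representation below is our own:

```python
def mean_squared_residue(matrix, rows, cols):
    """Cheng-Church mean squared residue of a bicluster (rows x cols).

    H = (1/|I||J|) * sum_{i,j} (a_ij - a_iJ - a_Ij + a_IJ)^2,
    where a_iJ and a_Ij are the row and column means within the bicluster
    and a_IJ is the overall bicluster mean. Lower is more coherent.
    """
    row_mean = {i: sum(matrix[i][j] for j in cols) / len(cols) for i in rows}
    col_mean = {j: sum(matrix[i][j] for i in rows) / len(rows) for j in cols}
    all_mean = sum(row_mean[i] for i in rows) / len(rows)
    n = len(rows) * len(cols)
    return sum((matrix[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
               for i in rows for j in cols) / n
```

A perfectly additive submatrix scores zero, which is why the score rewards coherent expression patterns rather than constant values.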

  12. Specializations for aerial hawking in the echolocation system of Molossus molossus (Molossidae, Chiroptera).

    PubMed

    Mora, E C; Macías, S; Vater, M; Coro, F; Kössl, M

    2004-07-01

While searching for prey, Molossus molossus broadcasts narrow-band calls of 11.42 ms duration, organized in pairs of pulses that alternate in frequency. The first signal of the pair is at 34.5 kHz, the second at 39.6 kHz. Pairs of calls with changing frequencies were only emitted when the interpulse intervals were below 200 ms. Maximum duty cycles during the search phase are close to 20%. Frequency alternation of search calls is interpreted as a mechanism for increasing the duty cycle, and thus the temporal continuity of scanning, as well as increasing the detection range. A neurophysiological correlate for the processing of search calls was found in the inferior colliculus: 64% of neurons respond to frequencies in the 30- to 40-kHz range, and only in this frequency range were closed tuning curves found for levels below 40 dB SPL. In addition, 15% of the neurons have double-tuned frequency-threshold curves with best thresholds at 34 and 39 kHz. Differing from observations in other bats, approach calls of M. molossus are longer and of higher frequencies than search calls. Close to the roost, the call frequency is increased to 45.0-49.8 kHz and, in addition, extremely broadband signals are emitted. This demonstrates high plasticity of call design.

  13. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

    Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. 
Conclusions By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762

  14. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization). For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. 
By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.

  15. Modified harmony search

    NASA Astrophysics Data System (ADS)

    Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt

    2017-09-01

A metaheuristic algorithm called Harmony Search (HS) is widely applied for optimizing parameters in many areas. HS is a derivative-free real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper proposes a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm method and particle swarm optimization for generating new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to compare their efficiency in terms of final accuracy, convergence speed and robustness.

  16. Mass and Intrusive Searches of Students in Public Schools: A Legal Perspective and a Call for Critical Analyses.

    ERIC Educational Resources Information Center

    Stefkovich, Jacqueline A.

    1996-01-01

    In recent years, public school students have been searched with metal detectors and occasionally sniffed by dogs or strip searched. Their lockers and bookbags have been searched, and their urine has been tested for drugs--all in the name of school safety. This article explores the legal ramifications of such searches and calls for a critical…

  17. Colliding or co-rotating ion beams in storage rings for EDM search

    NASA Astrophysics Data System (ADS)

    Koop, I. A.

    2015-11-01

    A new approach to search for and measure the electric dipole moment (EDM) of the proton, deuteron and some other light nuclei is presented. The idea of the method is to store two ion beams, circulating with different velocities, in a storage ring with crossed electric and magnetic guiding fields. One beam is polarized and its EDM is measured using the so-called ‘frozen spin’ method. The second beam, which is unpolarized, is used as a co-magnetometer, sensitive to the radial component of the ring’s magnetic field. The particle’s magnetic dipole moment (MDM) couples to the radial magnetic field and mimics the EDM signal. Measuring the relative vertical orbit separation of the two beams, caused by the presence of the radial magnetic field, one can control the unwanted MDM spin precession. Examples of the parameters for EDM storage rings for protons and other species of ions are presented. The use of crossed electric and magnetic fields helps to reduce the size of the ring by a factor of 10-20. We show that the bending radius of such an EDM storage ring could be about 2-3 m. Finally, a new method of increasing the spin coherence time, the so-called ‘spin wheel’, is proposed and its applicability to the EDM search is discussed.

  18. GIRAF: a method for fast search and flexible alignment of ligand binding interfaces in proteins at atomic resolution

    PubMed Central

    Kinjo, Akira R.; Nakamura, Haruki

    2012-01-01

    Comparison and classification of protein structures are fundamental means to understand protein functions. Due to the computational difficulty and the ever-increasing amount of structural data, however, it is in general not feasible to perform exhaustive all-against-all structure comparisons necessary for comprehensive classifications. To efficiently handle such situations, we have previously proposed a method, now called GIRAF. We herein describe further improvements in the GIRAF protein structure search and alignment method. The GIRAF method achieves extremely efficient search of similar structures of ligand binding sites of proteins by exploiting database indexing of structural features of local coordinate frames. In addition, it produces refined atom-wise alignments by iterative applications of the Hungarian method to the bipartite graph defined for a pair of superimposed structures. By combining the refined alignments based on different local coordinate frames, it is made possible to align structures involving domain movements. We provide detailed accounts for the database design, the search and alignment algorithms as well as some benchmark results. PMID:27493524
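The atom-wise refinement step pairs atoms of two superimposed structures by solving a minimum-cost bipartite assignment. GIRAF uses the Hungarian method for this; as an illustrative stand-in, a brute-force search over permutations yields the same optimum for small inputs (the cost matrix here is hypothetical, standing in for inter-atomic distances):

```python
from itertools import permutations

def best_assignment(cost):
    """Minimum-cost one-to-one assignment, brute force for small n.

    cost[i][j] is the cost of pairing row item i with column item j
    (e.g., the distance between atom i of one structure and atom j of
    the other after superposition). Exhaustive search over permutations
    is equivalent to the Hungarian method for small matrices.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost
```

The Hungarian method achieves the same result in polynomial time, which matters for real binding sites with many atoms; this sketch only shows what is being optimized.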

  19. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
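A sketch of the genetic search idea, in which fitness comes only from similarity feedback (standing in for the witness's ratings). Population size, crossover and mutation schemes here are illustrative guesses, not the system's actual design:

```python
import random

def evolve(similarity, length, pop_size=20, generations=60, rng=random):
    """Evolve face-code vectors toward a target using only similarity scores.

    `similarity(candidate)` returns a rating (higher is better); the search
    never sees the target directly, mirroring how a witness guides the
    genetic search by comparing candidates with a remembered face.
    """
    pop = [[rng.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=similarity, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # occasional mutation
                child[rng.randrange(length)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=similarity)
```

In practice the witness would rate only a small panel of images per generation; using a full-population fitness here just keeps the sketch short.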

  20. Data-driven indexing mechanism for the recognition of polyhedral objects

    NASA Astrophysics Data System (ADS)

    McLean, Stewart; Horan, Peter; Caelli, Terry M.

    1992-02-01

    This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.

  1. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines

    PubMed Central

    Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.

    2010-01-01

Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy, and tend to be too large for manual inspection; therefore, probabilistic methods must be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in results sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search-engine-independent scoring framework based on FDR which allows peptide identifications from different search engines to be combined, called the FDRScore. We observe that peptide identifications made by three search engines are infrequently false positives, and identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score based on the FDR within peptide identifications grouped according to the set of search engines that have made the identification, called the combined FDRScore. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
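The target-decoy FDR estimate that the FDRScore framework builds on can be sketched generically; the authors' exact estimator and tie handling are not given in the abstract, so this is a standard textbook version:

```python
def fdr_threshold(target_scores, decoy_scores, fdr=0.01):
    """Smallest score cutoff whose decoy-estimated FDR stays <= fdr.

    At threshold t the FDR is estimated as (#decoy hits >= t) divided by
    (#target hits >= t), the standard target-decoy approach. Returns None
    if no threshold achieves the requested FDR.
    """
    best = None
    for t in sorted(set(target_scores), reverse=True):
        n_target = sum(s >= t for s in target_scores)
        n_decoy = sum(s >= t for s in decoy_scores)
        if n_target and n_decoy / n_target <= fdr:
            best = t          # keep lowering while the FDR stays acceptable
        else:
            break
    return best
```

Combining engines, as the paper does, effectively sharpens the score so that more target identifications survive a threshold chosen this way.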

  2. Future trends which will influence waste disposal.

    PubMed Central

    Wolman, A

    1978-01-01

    The disposal and management of solid wastes are ancient problems. The evolution of practices naturally changed as populations grew and sites for disposal became less acceptable. The central search was for easy disposal at minimum costs. The methods changed from indiscriminate dumping to sanitary landfill, feeding to swine, reduction, incineration, and various forms of re-use and recycling. Virtually all procedures have disabilities and rising costs. Many methods once abandoned are being rediscovered. Promises for so-called innovations outstrip accomplishments. Markets for salvage vary widely or disappear completely. The search for conserving materials and energy at minimum cost must go on forever. PMID:570105

  3. ParticleCall: A particle filter for base calling in next-generation sequencing systems

    PubMed Central

    2012-01-01

    Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
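ParticleCall itself models the sequencing chemistry as a hidden Markov model; a generic bootstrap particle filter on a toy one-dimensional random walk illustrates the predict/weight/resample cycle it relies on (all model parameters below are illustrative, not ParticleCall's):

```python
import math
import random

def particle_filter(observations, n_particles=500, proc_sd=0.5, obs_sd=1.0,
                    rng=random):
    """Bootstrap particle filter for a 1-D Gaussian random walk.

    Each step: predict by propagating particles through the process model,
    weight by the Gaussian likelihood of the observation, then resample
    particles in proportion to their weights.
    """
    particles = [rng.gauss(0, 1) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # predict: propagate each particle through the process model
        particles = [p + rng.gauss(0, proc_sd) for p in particles]
        # weight: Gaussian likelihood of the observation given the particle
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample: multinomial resampling by weight
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In base calling the hidden state is discrete (the nucleotide sequence plus chemistry parameters) rather than a scalar, but the same three-step cycle applies.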

  4. Adaptive rood pattern search for fast block-matching motion estimation.

    PubMed

    Nie, Yao; Ma, Kai-Kuang

    2002-01-01

In this paper, we propose a novel and simple fast block-matching algorithm (BMA), called adaptive rood pattern search (ARPS), which consists of two sequential search stages: 1) initial search and 2) refined local search. For each macroblock (MB), the initial search is performed only once at the beginning in order to find a good starting point for the follow-up refined local search. By doing so, unnecessary intermediate searches and the risk of being trapped at local-minimum matching-error points can be greatly reduced in the case of long searches. For the initial search stage, an adaptive rood pattern (ARP) is proposed, and the ARP's size is dynamically determined for each MB, based on the available motion vectors (MVs) of the neighboring MBs. In the refined local search stage, a unit-size rood pattern (URP) is exploited repeatedly, and unrestrictedly, until the final MV is found. To further speed up the search, zero-motion prejudgment (ZMP) is incorporated in our method, which is particularly beneficial to those video sequences containing small motion contents. Extensive experiments conducted based on the MPEG-4 Verification Model (VM) encoding platform show that the search speed of our proposed ARPS-ZMP is about two to three times faster than that of the diamond search (DS), and our method even achieves higher peak signal-to-noise ratio (PSNR) particularly for those video sequences containing large and/or complex motion contents.
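The refined local search stage can be sketched as repeated evaluation of the unit-size rood (cross) pattern around the current best motion vector. The `cost` function stands in for the block-matching error (e.g., a SAD over the macroblock) and is supplied by the caller:

```python
def refine_search(cost, start, step=1):
    """Unit-rood local refinement, as in ARPS's second stage.

    Repeatedly evaluates the four rood (cross) neighbours of the current
    best motion vector and moves as long as any neighbour improves the
    matching error; stops at a local minimum of `cost`.
    """
    best = start
    best_cost = cost(best)
    while True:
        moved = False
        x, y = best
        for cand in ((x + step, y), (x - step, y),
                     (x, y + step), (x, y - step)):
            c = cost(cand)
            if c < best_cost:
                best, best_cost, moved = cand, c, True
        if not moved:
            return best
```

The initial-search stage's job is to hand this loop a starting point close enough that few rood iterations are needed.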

  5. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
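The NPV construction is simple to sketch: the query vector points from the mean of the negative examples toward the mean of the positives, and candidates are ranked by similarity to it. The dot-product scoring below is an assumption; the paper works in k-mer vector spaces with its own similarity choices:

```python
def npv_query(positives, negatives):
    """Negative-to-positive query vector: mean(positives) - mean(negatives).

    The resulting direction separates the positive examples (known binding
    sites) from the negatives, which is the core idea of the NPV method.
    """
    dim = len(positives[0])
    pos_mean = [sum(v[d] for v in positives) / len(positives)
                for d in range(dim)]
    neg_mean = [sum(v[d] for v in negatives) / len(negatives)
                for d in range(dim)]
    return [p - n for p, n in zip(pos_mean, neg_mean)]

def rank(candidates, query):
    """Sort candidate vectors by descending dot-product score with the query."""
    score = lambda v: sum(a * b for a, b in zip(v, query))
    return sorted(candidates, key=score, reverse=True)
```

The ODV method described in the abstract instead optimizes the discriminating direction rather than taking the difference of means.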

  6. Subspace projection method for unstructured searches with noisy quantum oracles using a signal-based quantum emulation device

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Ostrove, Corey I.

    2017-01-01

This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
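For context, the success probability of standard Grover search, against which the subspace projection method is compared, follows from the rotation-angle picture (a single marked item among N is assumed):

```python
import math

def grover_success(n_items, n_iterations):
    """Probability that Grover's algorithm finds the single marked item.

    After k iterations the marked-state amplitude is sin((2k + 1) * theta)
    with theta = asin(1 / sqrt(N)), so the success probability is its square.
    """
    theta = math.asin(1 / math.sqrt(n_items))
    return math.sin((2 * n_iterations + 1) * theta) ** 2

def optimal_iterations(n_items):
    """Iteration count that (nearly) maximizes the success probability."""
    return round(math.pi / (4 * math.asin(1 / math.sqrt(n_items))) - 0.5)
```

With zero iterations the probability is just 1/N (a random guess); near the optimal count of roughly (pi/4)*sqrt(N) iterations it approaches one, which is the baseline the noisy-oracle comparison is made against.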

  7. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  8. Amoeba-inspired nanoarchitectonic computing implemented using electrical Brownian ratchets.

    PubMed

    Aono, M; Kasai, S; Kim, S-J; Wakabayashi, M; Miwa, H; Naruse, M

    2015-06-12

    In this study, we extracted the essential spatiotemporal dynamics that allow an amoeboid organism to solve a computationally demanding problem and adapt to its environment, thereby proposing a nature-inspired nanoarchitectonic computing system, which we implemented using a network of nanowire devices called 'electrical Brownian ratchets (EBRs)'. By utilizing the fluctuations generated from thermal energy in nanowire devices, we used our system to solve the satisfiability problem, which is a highly complex combinatorial problem related to a wide variety of practical applications. We evaluated the dependency of the solution search speed on its exploration parameter, which characterizes the fluctuation intensity of EBRs, using a simulation model of our system called 'AmoebaSAT-Brownian'. We found that AmoebaSAT-Brownian enhanced the solution searching speed dramatically when we imposed some constraints on the fluctuations in its time series, and that it outperformed a well-known stochastic local search method. These results suggest a new computing paradigm, which may allow high-speed problem solving to be implemented by interacting nanoscale devices with low power consumption.
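    The stochastic local search baseline mentioned here is typified by WalkSAT, which also exploits randomness (algorithmic rather than thermal) to escape local minima. A minimal sketch of that baseline, not of AmoebaSAT-Brownian itself, using DIMACS-style signed integer literals:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT: classic stochastic local search for SAT.
    `clauses` is a list of tuples of non-zero ints; literal v means
    variable |v| must be True if v > 0, False if v < 0."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars + 1)]
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]          # solution found
        clause = rng.choice(unsat)
        if rng.random() < p:           # noise step: flip a random variable
            var = abs(rng.choice(clause))
        else:                          # greedy step: flip the least-damaging variable
            def broken_after_flip(v):
                assign[v] = not assign[v]
                b = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return b
            var = min((abs(l) for l in clause), key=broken_after_flip)
        assign[var] = not assign[var]
    return None                        # no solution found within the flip budget
```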

  9. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  10. The Gaussian CLs method for searches of new physics

    DOE PAGES

    Qian, X.; Tan, A.; Ling, J. J.; ...

    2016-04-23

    Here we describe a method based on the CLs approach to present results in searches of new physics, under the condition that the relevant parameter space is continuous. Our method relies on a class of test statistics developed for non-nested hypotheses testing problems, denoted by ΔT, which has a Gaussian approximation to its parent distribution when the sample size is large. This leads to a simple procedure of forming exclusion sets for the parameters of interest, which we call the Gaussian CLs method. Our work provides a self-contained mathematical proof for the Gaussian CLs method that explicitly outlines the required conditions. These conditions are milder than those required by Wilks' theorem to set confidence intervals (CIs). We illustrate the Gaussian CLs method in an example of searching for a sterile neutrino, where the CLs approach was rarely used before. We also compare data analysis results produced by the Gaussian CLs method and various CI methods to showcase their differences.
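    The core ratio is simple once the test statistic is approximately Gaussian under each hypothesis: CLs is the tail probability of the observed ΔT under the alternative divided by its tail probability under the null, and the alternative is excluded when CLs falls below a threshold (e.g. 0.05). A minimal sketch, assuming ΔT ~ N(μ, σ²) under each hypothesis (the means and widths would come from the analysis, which the abstract does not give):

```python
import math

def norm_sf(x, mu, sigma):
    """Gaussian survival function P(X >= x) for X ~ N(mu, sigma^2)."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def gaussian_cls(t_obs, mu_h1, sigma_h1, mu_h0, sigma_h0):
    """CLs = p_H1 / p_H0, with the test statistic assumed Gaussian under
    each hypothesis (the large-sample approximation)."""
    return norm_sf(t_obs, mu_h1, sigma_h1) / norm_sf(t_obs, mu_h0, sigma_h0)
```

If the observed statistic is typical of the null but five standard deviations away from what the alternative predicts, CLs is tiny and the alternative is excluded; if both hypotheses predict the same distribution, CLs is 1 and nothing is excluded.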

  11. Re-ranking via User Feedback: Georgetown University at TREC 2015 DD Track

    DTIC Science & Technology

    2015-11-20

    Re-ranking via User Feedback: Georgetown University at TREC 2015 DD Track Jiyun Luo and Hui Yang Department of Computer Science, Georgetown...involved in a search process, the user and the search engine. In TREC DD , the user is modeled by a simulator, called “jig”. The jig and the search engine...simulating user is provided by TREC 2015 DD Track organizer, and is called “jig”. There are 118 search topics in total. For each search topic, a short

  12. Detecting the Edge of the Tongue: A Tutorial

    ERIC Educational Resources Information Center

    Iskarous, Khalil

    2005-01-01

    The goal of this paper is to provide a tutorial introduction to the topic of edge detection of the tongue from ultrasound scans for researchers in speech science and phonetics. The method introduced here is Active Contours (also called snakes), a method for searching for an edge, assuming that it is a smooth curve in the image data. The advantage…

  13. Unstructured mesh methods for CFD

    NASA Technical Reports Server (NTRS)

    Peraire, J.; Morgan, K.; Peiro, J.

    1990-01-01

    Mesh generation methods for Computational Fluid Dynamics (CFD) are outlined. Geometric modeling is discussed. An advancing front method is described. Flow past a two-engine Falcon aeroplane is studied. An algorithm and associated data structure called the alternating digital tree, which efficiently solves the geometric searching problem, is described. The computation of an initial approximation to the steady state solution of a given problem is described. Mesh generation for transient flows is described.

  14. Information Science and Responsive Evaluation

    ERIC Educational Resources Information Center

    Stake, Robert E.

    2014-01-01

    Responsive evaluation builds upon the methods of informal evaluation in disciplined ways: getting personally acquainted with the evaluand, observation of activities, interviewing people who are in different ways familiar with the evaluand, searching documents that reveal what happened in the past or somewhere else. It calls for sustained effort to…

  15. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
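    The midpoint-insertion idea described here can be illustrated on a 1-D toy path. This is only a sketch of the bead-placement geometry, not the actual Searching String method, which operates on molecular geometries and uses gradient calls: each round inserts new beads at the midpoints on either side of the current highest-energy bead, so resolution grows fastest near the barrier top.

```python
def refine_path(path, energy, rounds=3):
    """Repeatedly insert beads at the midpoints flanking the
    highest-energy bead, concentrating resolution near the barrier."""
    path = list(path)
    for _ in range(rounds):
        i = max(range(len(path)), key=lambda j: energy(path[j]))
        refined = []
        for j, x in enumerate(path):
            if j == i and j > 0:                       # midpoint before the peak bead
                refined.append((path[j - 1] + x) / 2.0)
            refined.append(x)
            if j == i and j < len(path) - 1:           # midpoint after the peak bead
                refined.append((x + path[j + 1]) / 2.0)
        path = refined
    return path
```

Starting from three beads on a parabolic barrier peaked at 0.5, three rounds yield nine beads whose spacing halves with each round near the peak.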

  16. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging”, based on bottom-up parsing of positive samples, and by the search for rule sets. The sizes of the rule sets and the computation time depend on the search strategies. In addition to the global search for synthesizing minimal rule sets and the serial search, another method for synthesizing semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper shows several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.

  17. Path Planning For A Class Of Cutting Operations

    NASA Astrophysics Data System (ADS)

    Tavora, Jose

    1989-03-01

    Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.

  18. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    PubMed

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search using the intertask parallelization technique, employing the GPU only to perform the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search that uses the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
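    The computation being accelerated here is the standard Smith-Waterman local alignment recurrence. A plain CPU reference version (the baseline CUDA-SWfr is measured against, not the paper's GPU code) with simple match/mismatch scoring and linear gap penalties:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score via dynamic programming.
    H[i][j] is the best score of a local alignment ending at a[i-1], b[j-1];
    the 0 term lets an alignment restart anywhere."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match / mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

The intratask GPU parallelization in the paper exploits the fact that all cells on one anti-diagonal of H are independent and can be filled simultaneously.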

  19. D-score: a search engine independent MD-score.

    PubMed

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. The evolution of sex differences in mate searching when females benefit: new theory and a comparative test.

    PubMed

    McCartney, J; Kokko, H; Heller, K-G; Gwynne, D T

    2012-03-22

    Sexual selection is thought to have led to searching as a profitable, but risky way of males obtaining mates. While there is great variation in which sex searches, previous theory has not considered search evolution when both males and females benefit from multiple mating. We present new theory and link it with data to bridge this gap. Two different search protocols exist between species in the bush-cricket genus Poecilimon (Orthoptera): females search for calling males, or males search for calling females. Poecilimon males also transfer a costly nuptial food gift to their mates during mating. We relate variations in searching protocols to variation in nuptial gift size among 32 Poecilimon taxa. As predicted, taxa where females search produce significantly larger nuptial gifts than those where males search. Our model and results show that search roles can reverse when multiple mating brings about sufficiently strong material benefits to females.

  1. The Use of Binary Search Trees in External Distribution Sorting.

    ERIC Educational Resources Information Center

    Cooper, David; Lynch, Michael F.

    1984-01-01

    Suggests a new method of external distribution sorting, called tree partitioning, that involves use of a binary tree to split the incoming file into successively smaller partitions for internal sorting. The number of disc accesses during a tree-partitioning sort was calculated in a simulation using files extracted from British National Bibliography catalog files. (19…

  2. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence from particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to monitor the choice between the ACO and GHS; (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
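    The HS improvisation process that GHS and GHSACO build on is short enough to sketch. This is the basic HS loop only (memory consideration, pitch adjustment, random selection), not the GHSACO hybrid; the parameter values are conventional defaults, chosen here for illustration:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Basic Harmony Search minimising f over a box.
    hms: harmony memory size, hmcr: memory consideration rate,
    par: pitch adjustment rate."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                   # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: cost[i])
        c = f(new)
        if c < cost[worst]:                         # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]
```

On a smooth test function such as the 2-D sphere, the loop steadily replaces the worst member of the harmony memory and converges toward the optimum.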

  3. Optimal Analyses for 3×n AB Games in the Worst Case

    NASA Astrophysics Data System (ADS)

    Huang, Li-Te; Lin, Shun-Shii

    The past decades have witnessed a growing interest in research on deductive games such as Mastermind and AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. However, traditional tree-search approaches are not appropriate for solving this problem since they can only solve instances with smaller n. For larger values of n, a systematic approach is necessary. Therefore, intensive analyses of playing 3×n AB games optimally in the worst case are conducted, and a sophisticated method, called structural reduction, which aims at explaining the worst situation in this game, is developed in the study. Furthermore, a formula for calculating the optimal number of guesses required for arbitrary values of n is derived and proven correct.

  4. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
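    The search side of the comparison above is easy to make concrete. A minimal chronological backtracking solver for binary CSPs (a plain baseline, not the omega-CDBT algorithm, whose decomposition machinery the abstract only outlines):

```python
def backtrack(domains, constraints, assignment=None):
    """Chronological backtracking for a binary CSP.
    `domains` maps variable -> list of values; `constraints` maps
    (u, w) -> predicate(val_u, val_w). Returns a solution dict or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        # check consistency of var=val against already-assigned neighbours
        consistent = True
        for (u, w), pred in constraints.items():
            if u == var and w in assignment:
                consistent = consistent and pred(val, assignment[w])
            elif w == var and u in assignment:
                consistent = consistent and pred(assignment[u], val)
        if consistent:
            assignment[var] = val
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]      # undo and try the next value
    return None
```

For example, 3-colouring a triangle graph (three mutually constrained variables) succeeds with all three colours distinct.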

  5. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design of digital finite impulse response (FIR) filters, based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at some specified frequency, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The excellence of the proposed method is evaluated using several important attributes of a filter. The comparative study evidences the effectiveness of the proposed method for the design of FIR filters.

  6. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
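    The selection criterion described above reduces to a one-liner once each candidate experiment is summarised by its predicted outcome distribution. The brute-force version below is a stand-in for nested entropy sampling, which the abstract describes as avoiding exactly this exhaustive scan via a rising threshold:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def most_informative(experiments):
    """Pick the experiment whose predicted outcome distribution has
    maximum Shannon entropy, i.e. the most uncertain (most informative) one."""
    return max(experiments, key=lambda e: entropy(experiments[e]))
```

An experiment whose outcome is already certain carries zero entropy and teaches us nothing; a 50/50 prediction carries a full bit and is selected.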

  7. Probe classification of on-off type DNA microarray images with a nonlinear matching measure

    NASA Astrophysics Data System (ADS)

    Ryu, Munho; Kim, Jong Dae; Min, Byoung Goo; Kim, Jongwon; Kim, Y. Y.

    2006-01-01

    We propose a nonlinear matching measure, called counting measure, as a signal detection measure that is defined as the number of on pixels in the spot area. It is applied to classify probes for an on-off type DNA microarray, where each probe spot is classified as hybridized or not. The counting measure also incorporates the maximum response search method, where the expected signal is obtained by taking the maximum among the measured responses of the various positions and sizes of the spot template. The counting measure was compared to existing signal detection measures such as the normalized covariance and the median for 2390 patient samples tested on the human papillomavirus (HPV) DNA chip. The counting measure performed the best regardless of whether or not the maximum response search method was used. The experimental results showed that the counting measure combined with the positional search was the most preferable.
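    The counting measure itself is just an on-pixel count inside the spot template, and the maximum response search takes the best count over candidate template placements. A minimal sketch on a row-major grey-level image, with the binarisation threshold and area normalisation as illustrative assumptions:

```python
def counting_measure(image, top, left, size, threshold=128):
    """Counting measure: number of 'on' pixels inside a square spot
    template whose top-left corner is at (top, left)."""
    return sum(1 for r in range(top, top + size)
                 for c in range(left, left + size)
                 if image[r][c] >= threshold)

def max_response(image, positions, sizes, threshold=128):
    """Maximum-response search: best area-normalised counting measure
    over candidate template positions and sizes."""
    return max(counting_measure(image, r, c, s, threshold) / (s * s)
               for (r, c) in positions for s in sizes)
```

A hybridized spot then yields a response near 1.0 for the correctly placed template, while misplaced templates score lower.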

  8. Conceptual search in electronic patient record.

    PubMed

    Baud, R H; Lovis, C; Ruch, P; Rassinoux, A M

    2001-01-01

    Search by content in a large corpus of free texts in the medical domain is, today, only partially solved. The so-called GREP approach (Get Regular Expression and Print), based on highly efficient string matching techniques, is subject to inherent limitations, especially its inability to recognize domain specific knowledge. Such methods oblige the user to formulate his or her query in a logical Boolean style; if this constraint is not fulfilled, the results are poor. The authors present an enhancement to string matching search by the addition of a light conceptual model behind the word lexicon. The new system accepts any sentence as a query and radically improves the quality of results. Efficiency regarding execution time is obtained at the expense of implementing advanced indexing algorithms in a pre-processing phase. The method is described and commented and a brief account of the results illustrates this paper.

  9. Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.

    PubMed

    Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin

    2014-10-01

    With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
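    The profile construction described above (normalised term frequency as a preference degree) is easy to sketch. The fuzzy query-relevance measure below is only an illustrative stand-in, since the abstract does not give the FMRS formula; here a resource satisfies a user to the degree of their weakest shared-tag preference:

```python
def tag_profile(tag_counts):
    """Normalised term frequency: each tag's count divided by the
    largest count, giving a preference degree in (0, 1]."""
    peak = max(tag_counts.values())
    return {t: n / peak for t, n in tag_counts.items()}

def preference_match(user_profile, resource_profile):
    """Illustrative fuzzy satisfaction: minimum preference degree over
    the tags shared by the user and the resource (0.0 if none shared)."""
    shared = set(user_profile) & set(resource_profile)
    if not shared:
        return 0.0
    return min(min(user_profile[t], resource_profile[t]) for t in shared)
```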

  10. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lévy flight.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single-parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmarked IIR systems using same-order and reduced-order systems. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples, and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers, based on standard vocabulary codes, are assigned to information sources. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems: one based on the Open Software Foundation/Distributed Computing Environment and the other based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
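    The two-step resolution described above can be sketched as a pair of lookup tables. All identifiers, call numbers, and hosts below are hypothetical placeholders, not values from the paper; the point is that only the second table changes when a shelf moves to a new host:

```python
# Step 1: location-independent call numbers -> virtual-shelf identifiers.
# The call numbers here are hypothetical vocabulary-style codes.
call_number_to_shelf = {
    "WP 100": "shelf:gynecology",
    "QS 504": "shelf:histology",
}

# Step 2: a location directory maps shelf identifiers to network locations.
shelf_directory = {
    "shelf:gynecology": "https://host-a.example.org/",
    "shelf:histology": "https://host-b.example.org/",
}

def resolve(call_number):
    """Two-step, location-independent resolution of a call number."""
    shelf = call_number_to_shelf[call_number]
    return shelf_directory[shelf]
```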

  12. The Day Search Stood Still

    ERIC Educational Resources Information Center

    Sexton, Will

    2010-01-01

    That little rectangle with a button next to it? (Those things called search boxes but might just as well be called "resource drains.") Imagine it disappearing from a library's webpages. The intricate works behind these design elements make up a major portion of what library staff spends time and money developing, populating, supporting,…

  13. Searching for life in the Universe: unconventional methods for an unconventional problem.

    PubMed

    Nealson, K H; Tsapin, A; Storrie-Lombardi, M

    2002-12-01

    The search for life, on and off our planet, can be done by conventional methods with which we are all familiar. These methods are sensitive and specific, and are often capable of detecting even single cells. However, if the search broadens to include life that may be different (even subtly different) in composition, the methods and even the approach must be altered. Here we discuss the development of what we call non-earthcentric life detection--detecting life with methods that could detect life no matter what its form or composition. To develop these methods, we simply ask, can we define life in terms of its general properties and particularly those that can be measured and quantified? Taking such an approach we can search for life using physics and chemistry to ask questions about structure, chemical composition, thermodynamics, and kinetics. Structural complexity can be searched for using computer algorithms that recognize complex structures. Once identified, these structures can be examined for a variety of chemical traits, including elemental composition, chirality, and complex chemistry. A second approach involves defining our environment in terms of energy sources (i.e., reductants), and oxidants (e.g. what is available to eat and breathe), and then looking for areas in which such phenomena are inexplicably out of chemical equilibrium. These disequilibria, when found, can then be examined in detail for the presence of the structural and chemical complexity that presumably characterizes any living systems. By this approach, we move the search for life to one that should facilitate the detection of any earthly life it encountered, as well as any non-conventional life forms that have structure, complex chemistry, and live via some form of redox chemistry.

  14. Critical Information Literacy beyond the University: Lessons from Service in a Women's Health Interest Group

    ERIC Educational Resources Information Center

    Fountain, Kathleen Carlisle

    2013-01-01

    Library instruction methods most frequently focus on teaching students searching skills to navigate the maze of library databases to locate appropriate research materials. The current theory of critical information literacy instruction calls on librarians to spend more of their time in the classroom focused on understanding the social, political,…

  15. A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.

    PubMed

    Zhang, Geng; Li, Yangmin

    2016-06-01

    Avoiding falling into local optima is a challenging issue, especially when facing high-dimensional nonseparable problems where the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses a 1-D swarm to search each dimension separately and thus converges fast. Besides, it can obtain global optimum elements according to our experimental results and analyses. MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate a good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.

  16. Accelerated Profile HMM Searches

    PubMed Central

    Eddy, Sean R.

    2011-01-01

    Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the “multiple segment Viterbi” (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call “sparse rescaling”. These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches. PMID:22039361
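
    The core of MSV, scoring ungapped local alignment segments, can be sketched by running a maximum-subarray (Kadane) scan along every diagonal of the comparison matrix. This toy version finds only the single best segment, with made-up match/mismatch scores; HMMER3's MSV sums multiple segments under a profile HMM and is striped and vectorized.

```python
def best_ungapped_segment(query, target, match=2, mismatch=-3):
    """Best-scoring single ungapped local alignment segment, found by
    a Kadane maximum-subarray scan along every diagonal. MSV sums
    several such segments under a profile HMM; this is only the
    scalar single-segment core with toy scores."""
    best = 0
    for shift in range(-len(query) + 1, len(target)):
        run = 0
        for i in range(len(query)):
            j = i + shift
            if 0 <= j < len(target):
                # Extend the current segment or restart it at zero.
                run = max(0, run + (match if query[i] == target[j] else mismatch))
                best = max(best, run)
    return best

print(best_ungapped_segment("HMMER", "XXHMMERXX"))   # full 5-residue match scores 10
```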

  17. Electronic Document Management Using Inverted Files System

    NASA Astrophysics Data System (ADS)

    Suhartono, Derwin; Setiawan, Erwin; Irwanto, Djon

    2014-03-01

    The number of documents is growing rapidly, in both paper and electronic form. This can be seen in a data sample from the SpringerLink publisher, which showed an increase in its digital document collection from 2003 to mid-2010. Managing these documents well has therefore become an important need. This paper describes a method for managing documents called the inverted files system. Applied to electronic documents, the inverted files system allows them to be searched over the Internet using a search engine. It can improve both the document search mechanism and the document storage mechanism.
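
    A minimal sketch of an inverted file may help: each term maps to the set of documents containing it, so a query becomes a dictionary lookup rather than a scan of every document. The document texts below are purely illustrative.

```python
from collections import defaultdict

# Build an inverted file: term -> set of IDs of documents containing it.
docs = {
    1: "electronic document management",
    2: "document search over the internet",
    3: "managing electronic collections",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Return the sorted IDs of documents containing the term."""
    return sorted(index.get(term.lower(), ()))

print(search("document"))     # documents 1 and 2
print(search("electronic"))   # documents 1 and 3
```

    Real systems add tokenization, stemming, and postings-list compression on top of this core structure, but the lookup principle is the same.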

  18. Acceleration of saddle-point searches with machine learning.

    PubMed

    Peterson, Andrew A

    2016-08-21

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically identifies regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.
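
    The verify-and-retrain loop described above can be sketched in one dimension, where the barrier top along a reaction coordinate plays the role of the saddle point. Here the "machine-learning" surrogate is reduced to its simplest possible form, a quadratic fit through the three most recent samples, and the energy profile is a toy stand-in rather than an ab initio calculation.

```python
import math

CALLS = 0
def expensive_energy(x):
    """Stand-in for an expensive ab initio call: a toy energy profile
    along a 1-D reaction coordinate whose barrier top sits at pi/2."""
    global CALLS
    CALLS += 1
    return math.sin(x)

def parabola_peak(p0, p1, p2):
    """Stationary point of the quadratic through three (x, E) samples:
    the surrogate model reduced to its simplest form."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    return -b / (2.0 * a)

# Surrogate-verify loop: fit on known data, jump to the surrogate's
# predicted barrier top, verify with one expensive call, refit.
samples = [(x, expensive_energy(x)) for x in (1.0, 1.5, 2.0)]
for _ in range(4):
    x_new = parabola_peak(*samples[-3:])
    samples.append((x_new, expensive_energy(x_new)))

x_top = samples[-1][0]
print(x_top, CALLS)   # converges toward pi/2 using 7 energy calls
```

    The point of the pattern is the bookkeeping: each verification either confirms the prediction or becomes a new training point exactly where the surrogate was weakest.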

  19. Acceleration of saddle-point searches with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Andrew A., E-mail: andrew-peterson@brown.edu

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically identifies regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.

  20. Tracking and tracing of participants in two large cancer screening trials.

    PubMed

    Marcus, Pamela M; Childs, Jeffery; Gahagan, Betsy; Gren, Lisa H

    2012-07-01

    Many clinical trials rely on participant report to first learn about study events. It is therefore important to have current contact information and the ability to locate participants should information become outdated. The Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) and the Lung Screening Study (LSS) component of the National Lung Screening Trial, two large randomized cancer screening trials, enrolled almost 190,000 participants on whom annual contact was necessary. Ten screening centers participated in both trials. Centers developed methods to track participants and trace them when necessary. We describe the methods used to keep track of participants and trace them when lost, and the extent to which each method was used. Screening center coordinators were asked, using a self-administered paper questionnaire, to rate the extent to which specific tracking and tracing methods were used. Many methods were used by the screening centers, including telephone calls, mail, and internet searches. The most extensively used methods involved telephoning the participant on his or her home or cell phone, or telephoning a person identified by the participant as someone who would know about the participant's whereabouts. Internet searches were used extensively as well; these included searches on names, reverse-lookup searches (on addresses or telephone numbers) and searches of the Social Security Death Index. Over time, the percentage of participants requiring tracing decreased. Telephone communication and internet services were useful in keeping track of PLCO and LSS participants and tracing them when contact information was no longer valid. Published by Elsevier Inc.

  1. Hunting for a headhunter. How to select a physician search firm.

    PubMed

    Walker, R

    1989-05-01

    Many healthcare facilities in search of a physician are bombarded with offers from physician search firms to drum up potential candidates. Determining which firm has the right stuff for the job takes considerable time and skill. More than 60 companies belong to the National Association of Physician Recruiters, and their methods, policies, and results may vary widely. Administrators can begin getting basic information by contacting firms and requesting written material. During the initial telephone call, the administrator in charge of the search should speak with a consultant or principal of the firm (whoever would be doing the search) and find out what experience that person has had with searches for facilities in similar geographical areas, his or her success in placing physicians in the needed specialty, how many searches the consultant undertakes at one time, whether the firm guarantees its services, and how its fees are structured. After evaluating the written material, the administrator should choose two or three search firms to make personal presentations. These presentations should follow a logical sequence and include statistics, completion times, ratios, and specific deadlines for various parts of the search process.

  2. Research Trend Visualization by MeSH Terms from PubMed.

    PubMed

    Yang, Heyoung; Lee, Hyuck Jai

    2018-05-30

    Motivation: PubMed is a primary source of biomedical information, comprising a search tool and the biomedical literature from MEDLINE, the US National Library of Medicine's premier bibliographic database, as well as life science journals and online books. Complementary tools to PubMed have been developed to help users search the literature and acquire knowledge. However, these tools are insufficient to overcome the difficulties users face due to the proliferation of biomedical literature, and a new method is needed for searching knowledge in the biomedical field. Methods: This study proposes a new method for visualizing recent research trends based on the documents retrieved for a search query given by the user. Medical Subject Headings (MeSH) are used as the primary analytical element: MeSH terms are extracted from the literature and the correlations between them are calculated. A MeSH network, called MeSH Net, is generated as the final result based on the Pathfinder Network algorithm. Results: A case study for the verification of the proposed method was carried out on a research area defined by the search query (immunotherapy and cancer and "tumor microenvironment"). The MeSH Net generated by the method is in good agreement with the actual research activities in the research area (immunotherapy). Conclusion: A prototype application generating MeSH Net was developed. The application, which could be used as a "guide map for travelers", allows users to quickly and easily grasp research trends. The combination of PubMed and MeSH Net is expected to be an effective complementary system for researchers in the biomedical field who experience difficulties with search and information analysis.
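
    The term-correlation step can be sketched as plain co-occurrence counting over per-article MeSH annotations. The annotations below are invented for illustration, and a raw count threshold stands in for the Pathfinder Network pruning the paper actually uses.

```python
from itertools import combinations
from collections import Counter

# Hypothetical per-article MeSH annotations (illustrative only).
articles = [
    {"Immunotherapy", "Neoplasms", "Tumor Microenvironment"},
    {"Immunotherapy", "Neoplasms", "T-Lymphocytes"},
    {"Neoplasms", "Tumor Microenvironment"},
]

# Count how often each pair of MeSH terms is assigned to the same article.
pair_counts = Counter()
for mesh_terms in articles:
    for a, b in combinations(sorted(mesh_terms), 2):
        pair_counts[(a, b)] += 1

# Keep the strongest links; a real MeSH Net would apply the Pathfinder
# Network algorithm here rather than a raw threshold.
edges = {pair for pair, n in pair_counts.items() if n >= 2}
print(edges)
```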

  3. Qualitative Insights from a Canadian Multi-Institutional Research Study: In Search of Meaningful E-Learning

    ERIC Educational Resources Information Center

    Carter, Lorraine M.; Salyers, Vince; Myers, Sue; Hipfner, Carol; Hoffart, Caroline; MacLean, Christa; White, Kathy; Matus, Theresa; Forssman, Vivian; Barrett, Penelope

    2014-01-01

    This paper reports the qualitative findings of a mixed methods research study conducted at three Canadian post-secondary institutions. Called the Meaningful E-learning or MEL project, the study was an exploration of the teaching and learning experiences of faculty and students as well as their perceptions of the benefits and challenges of…

  4. Ant Colony Optimization With Local Search for Dynamic Traveling Salesman Problems.

    PubMed

    Mavrovouniotis, Michalis; Muller, Felipe M; Yang, Shengxiang

    2016-06-13

    For a dynamic traveling salesman problem (DTSP), the weights (or traveling times) between two cities (or nodes) may be subject to changes. Ant colony optimization (ACO) algorithms have proved to be powerful methods to tackle such problems due to their adaptation capabilities. It has been shown that the integration of local search operators can significantly improve the performance of ACO. In this paper, a memetic ACO algorithm, where a local search operator (called unstring and string) is integrated into ACO, is proposed to address DTSPs. The best solution from ACO is passed to the local search operator, which removes and inserts cities in such a way that improves the solution quality. The proposed memetic ACO algorithm is designed to address both symmetric and asymmetric DTSPs. The experimental results show the efficiency of the proposed memetic algorithm for addressing DTSPs in comparison with other state-of-the-art algorithms.
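
    The role of the local search operator can be illustrated with a much simpler remove-and-reinsert move: take each city out of the tour and put it back wherever the tour becomes shortest. This is only a crude analogue of the published "unstring and string" operator, applied here to a four-city toy instance.

```python
import math

def tour_length(tour, pts):
    """Cyclic tour length over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def remove_reinsert(tour, pts):
    """Simple remove-and-reinsert local search: repeatedly take each
    city out of the tour and reinsert it at the position giving the
    shortest tour (a crude analogue of 'unstring and string')."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for city in list(best):
            rest = [c for c in best if c != city]
            candidates = [rest[:i] + [city] + rest[i:]
                          for i in range(len(rest) + 1)]
            cand = min(candidates, key=lambda t: tour_length(t, pts))
            if tour_length(cand, pts) < tour_length(best, pts) - 1e-9:
                best, improved = cand, True
    return best

pts = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
bad = [0, 2, 1, 3]                    # self-crossing tour of the unit square
good = remove_reinsert(bad, pts)
print(tour_length(good, pts))         # 4.0, the optimal square perimeter
```

    In the memetic setting, a dynamic change to the weights would simply alter `tour_length`, and the same operator repairs the best ant's tour against the new distances.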

  5. Standardization of Keyword Search Mode

    ERIC Educational Resources Information Center

    Su, Di

    2010-01-01

    In spite of its popularity, keyword search mode has not been standardized. Though information professionals are quick to adapt to various presentations of keyword search mode, novice end-users may find keyword search confusing. This article compares keyword search mode in some major reference databases and calls for standardization. (Contains 3…

  6. Development and Validation of the Calling and Vocation Questionnaire (CVQ) and Brief Calling Scale (BCS)

    ERIC Educational Resources Information Center

    Dik, Bryan J.; Eldridge, Brandy M.; Steger, Michael F.; Duffy, Ryan D.

    2012-01-01

    Research on work as a calling is limited by measurement concerns. In response, the authors introduce the multidimensional Calling and Vocation Questionnaire (CVQ) and the Brief Calling scale (BCS), instruments assessing presence of, and search for, a calling. Study 1 describes CVQ development using exploratory and confirmatory factor analysis…

  7. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed Central

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-informational science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
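
    The two-step resolution described above — call number to subject-class (virtual shelf) identifier, then identifier to current network location — can be sketched with two dictionaries. All call numbers, shelf identifiers, and URLs below are hypothetical.

```python
# Step-1 table: location-independent call numbers map to virtual-shelf
# (subject-class) identifiers, never directly to servers.
call_number_to_shelf = {
    "WB-100": "shelf:internal-medicine",
    "QS-4":   "shelf:anatomy",
}

# Step-2 table: the location directory maps shelf identifiers to
# wherever those shelves currently live on the network.
location_directory = {
    "shelf:internal-medicine": "http://serverA.example.org/",
    "shelf:anatomy":           "http://serverB.example.org/",
}

def resolve(call_number):
    """Resolve a call number to a network location in two steps."""
    shelf = call_number_to_shelf[call_number]   # subject class
    return location_directory[shelf]            # current location

print(resolve("WB-100"))
```

    The design payoff is that relocating a shelf touches only the location directory; every assigned call number stays valid, which is the location-independence the framework is after.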

  8. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) to optimize the structural cost under deterministic and probabilistic constraints. The Monte-Carlo simulation (MCS) method is considered as the most reliable method for estimating the probabilities of reliability. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.

  9. SlideSort: all pairs similarity search for short reads

    PubMed Central

    Shimizu, Kana; Tsuda, Koji

    2011-01-01

    Motivation: Recent progress in DNA sequencing technologies calls for fast and accurate algorithms that can evaluate sequence similarity for a huge amount of short reads. Searching similar pairs from a string pool is a fundamental process of de novo genome assembly, genome-wide alignment and other important analyses. Results: In this study, we designed and implemented an exact algorithm SlideSort that finds all similar pairs from a string pool in terms of edit distance. Using an efficient pattern growth algorithm, SlideSort discovers chains of common k-mers to narrow down the search. Compared to existing methods based on single k-mers, our method is more effective in reducing the number of edit distance calculations. In comparison to backtracking methods such as BWA, our method is much faster in finding remote matches, scaling easily to tens of millions of sequences. Our software has an additional function of single link clustering, which is useful in summarizing short reads for further processing. Availability: Executable binary files and C++ libraries are available at http://www.cbrc.jp/~shimizu/slidesort/ for Linux and Windows. Contact: slidesort@m.aist.go.jp; shimizu-kana@aist.go.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21148542
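
    The filtering idea can be sketched with the single-k-mer baseline that SlideSort improves upon: only read pairs sharing at least one k-mer are verified with a full edit-distance computation. SlideSort itself chains common k-mers to prune far more aggressively; the reads below are illustrative.

```python
from collections import defaultdict
from itertools import combinations

def edit_distance(a, b):
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # (mis)match
        prev = cur
    return prev[-1]

def similar_pairs(reads, d, k):
    """All pairs within edit distance d, prefiltered by shared k-mers.
    (Single-k-mer baseline; SlideSort chains common k-mers to narrow
    candidates much further.)"""
    buckets = defaultdict(set)
    for idx, r in enumerate(reads):
        for p in range(len(r) - k + 1):
            buckets[r[p:p + k]].add(idx)
    candidates = set()
    for ids in buckets.values():
        candidates.update(combinations(sorted(ids), 2))
    return sorted((i, j) for i, j in candidates
                  if edit_distance(reads[i], reads[j]) <= d)

reads = ["ACGTACGT", "ACGTACGA", "TTTTCCCC"]
print(similar_pairs(reads, d=1, k=4))   # only the first two reads pair up
```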

  10. Heuristics in Problem Solving: The Role of Direction in Controlling Search Space

    ERIC Educational Resources Information Center

    Chu, Yun; Li, Zheng; Su, Yong; Pizlo, Zygmunt

    2010-01-01

    Isomorphs of a puzzle called m+m resulted in faster solution times and an easily reproduced solution path in a labeled version of the problem compared to a more difficult binary version. We conjecture that performance is related to a type of heuristic called direction that not only constrains search space in the labeled version, but also…

  11. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.

  12. Sterile Neutrino Search with the Double Chooz Experiment

    NASA Astrophysics Data System (ADS)

    Hellwig, D.; Matsubara, T.; Double Chooz Collaboration

    2017-09-01

    Double Chooz is a reactor antineutrino disappearance experiment located in Chooz, France. A far detector at a distance of about 1 km from the reactor cores has been operating since 2011; a near detector of identical design at a distance of about 400 m has been operating since the beginning of 2015. Beyond the precise measurement of θ13, Double Chooz has a strong sensitivity to so-called light sterile neutrinos. Sterile neutrinos are neutrino mass states that do not take part in weak interactions but may mix with the known neutrino states. In this paper, we present an analysis method to search for sterile neutrinos and the expected sensitivity with the baselines of our detectors.

  13. The Effect of Drama-Based Pedagogy on PreK-16 Outcomes: A Meta-Analysis of Research from 1985 to 2012

    ERIC Educational Resources Information Center

    Lee, Bridget Kiger; Patall, Erika A.; Cawthon, Stephanie W.; Steingut, Rebecca R.

    2015-01-01

    The President's Committee on the Arts and Humanities report heartily supported arts integration. However, the President's Committee called for a better understanding of the dimensions of quality and best practices. One promising arts integration method is drama-based pedagogy (DBP). A comprehensive search of the literature revealed 47…

  14. Mixed methods systematic review exploring mentorship outcomes in nursing academia.

    PubMed

    Nowell, Lorelli; Norris, Jill M; Mrklas, Kelly; White, Deborah E

    2017-03-01

    The aim of this study was to report on a mixed methods systematic review that critically examines the evidence for mentorship in nursing academia. Nursing education institutions globally have issued calls for mentorship. There is emerging evidence to support the value of mentorship in other disciplines, but the extant state of the evidence in nursing academia is not known. A comprehensive review of the evidence is required. A mixed methods systematic review. Five databases (MEDLINE, CINAHL, EMBASE, ERIC, PsycINFO) were searched using an a priori search strategy from inception to 2 November 2015 to identify quantitative, qualitative and mixed methods studies. Grey literature searches were also conducted in electronic databases (ProQuest Dissertations and Theses, Index to Theses) and mentorship conference proceedings and by hand searching the reference lists of eligible studies. Study quality was assessed prior to inclusion using standardized critical appraisal instruments from the Joanna Briggs Institute. A convergent qualitative synthesis design was used where results from qualitative, quantitative and mixed methods studies were transformed into qualitative findings. Mentorship outcomes were mapped to a theory-informed framework. Thirty-four studies were included in this review, from the 3001 records initially retrieved. In general, mentorship had a positive impact on behavioural, career, attitudinal, relational and motivational outcomes; however, the methodological quality of studies was weak. This review can inform the objectives of mentorship interventions and contribute to a more rigorous approach to studies that assess mentorship outcomes. © 2016 John Wiley & Sons Ltd.

  15. "Through White Man's Eyes": Beatrice Culleton Mosionier's "In Search of April Raintree" and Reading for Decolonization

    ERIC Educational Resources Information Center

    Hanson, Aubrey Jean

    2012-01-01

    "In Search of April Raintree" by Beatrice Culleton Mosionier is a text that continues, over twenty-five years after its initial publication, to call its readers to reflect on racism in Canada and beyond. It is precisely this call that must incite readers also to exercise a vigilant critical consciousness and to seek out spaces in the…

  16. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396

  17. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm.

    PubMed

    Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.

  18. Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data.

    PubMed

    Zhang, Chao; Zheng, Yu; Ma, Xiuli; Han, Jiawei

    2015-08-01

    Recent years have witnessed the wide proliferation of geo-sensory applications wherein a bundle of sensors are deployed at different locations to cooperatively monitor the target condition. Given massive geo-sensory data, we study the problem of mining spatial co-evolving patterns (SCPs), i.e., groups of sensors that are spatially correlated and co-evolve frequently in their readings. SCP mining is of great importance to various real-world applications, yet it is challenging because (1) the truly interesting evolutions are often flooded by numerous trivial fluctuations in the geo-sensory time series; and (2) the pattern search space is extremely large due to the spatiotemporal combinatorial nature of SCP. In this paper, we propose a two-stage method called Assembler. In the first stage, Assembler filters trivial fluctuations using wavelet transform and detects frequent evolutions for individual sensors via a segment-and-group approach. In the second stage, Assembler generates SCPs by assembling the frequent evolutions of individual sensors. Leveraging the spatial constraint, it conceptually organizes all the SCPs into a novel structure called the SCP search tree, which facilitates the effective pruning of the search space to generate SCPs efficiently. Our experiments on both real and synthetic data sets show that Assembler is effective, efficient, and scalable.

  19. The effect of working on-call on stress physiology and sleep: A systematic review.

    PubMed

    Hall, Sarah J; Ferguson, Sally A; Turner, Anne I; Robertson, Samuel J; Vincent, Grace E; Aisbett, Brad

    2017-06-01

    On-call work is becoming an increasingly common work pattern, yet the human impacts of this type of work are not well established. Given the likelihood of calls to occur outside regular work hours, it is important to consider the potential impact of working on-call on stress physiology and sleep. The aims of this review were to collate and evaluate evidence on the effects of working on-call from home on stress physiology and sleep. A systematic search of Ebsco Host, Embase, Web of Science, Scopus and ScienceDirect was conducted. Search terms included: on-call, on call, standby, sleep, cortisol, heart rate, adrenaline, noradrenaline, nor-adrenaline, epinephrine, norepinephrine, nor-epinephrine, salivary alpha amylase and alpha amylase. Eight studies met the inclusion criteria, with only one study investigating the effect of working on-call from home on stress physiology. All eight studies investigated the effect of working on-call from home on sleep. Working on-call from home appears to adversely affect sleep quantity, and in most cases, sleep quality. However, studies did not differentiate between night's on-call from home with and without calls. Data examining the effect of working on-call from home on stress physiology were not sufficient to draw meaningful conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest-neighbor searching techniques to implement the fast matrix-vector products.
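
    The reason fast matrix-vector products pay off is that iterative solvers such as conjugate gradients touch the kriging covariance matrix only through products K·v. The sketch below solves a toy symmetric positive-definite system standing in for the ordinary kriging equations; a real implementation would swap the dense matvec for a fast multipole or nearest-neighbor approximation.

```python
def matvec(A, x):
    """Dense matrix-vector product; the piece a fast method replaces."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def conjugate_gradient(matvec_fn, b, iters=50, tol=1e-12):
    """Conjugate gradients for SPD systems: note that the matrix is
    accessed only through matvec_fn, never entry by entry."""
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec_fn(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy SPD "covariance" system K w = b standing in for the kriging equations.
K = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
w = conjugate_gradient(lambda v: matvec(K, v), b)
print(w)
```

    With an O(n log n) approximate matvec substituted for the dense one, each CG iteration drops from O(n²) to near-linear cost, which is the speedup the paper exploits.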

  1. Development of a two-stage gene selection method that incorporates a novel hybrid approach using the cuckoo optimization algorithm and harmony search for cancer classification.

    PubMed

    Elyasigomari, V; Lee, D A; Screen, H R C; Shaheed, M H

    2017-03-01

    For each cancer type, only a few genes are informative. Due to the so-called 'curse of dimensionality' problem, the gene selection task remains a challenge. To overcome this problem, we propose a two-stage gene selection method called MRMR-COA-HS. In the first stage, minimum redundancy and maximum relevance (MRMR) feature selection is used to select a subset of relevant genes. The selected genes are then fed into a wrapper setup that combines a new algorithm, COA-HS, using the support vector machine as a classifier. The method was applied to four microarray datasets, and the performance was assessed by leave-one-out cross-validation. Comparative performance assessment of the proposed method with other evolutionary algorithms suggested that the proposed algorithm significantly outperforms other methods in selecting fewer genes while maintaining the highest classification accuracy. The functions of the selected genes were further investigated, and it was confirmed that the selected genes are biologically relevant to each cancer type. Copyright © 2017. Published by Elsevier Inc.
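
    The first-stage MRMR idea can be sketched greedily: repeatedly pick the feature most relevant to the target, penalized by its mean redundancy with the features already chosen. MRMR is normally stated with mutual information; absolute Pearson correlation and the tiny invented "expression" vectors below are stand-ins for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def mrmr(features, target, k):
    """Greedy MRMR sketch: maximize relevance to the target minus
    mean redundancy (|correlation|) with already-chosen features."""
    chosen = []
    remaining = list(features)
    while remaining and len(chosen) < k:
        def score(name):
            rel = abs(pearson(features[name], target))
            red = (sum(abs(pearson(features[name], features[c]))
                       for c in chosen) / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

target = [2, 3, 4, 3, 4, 5]
features = {
    "g1": [1, 2, 3, 1, 2, 3],   # relevant
    "g2": [2, 4, 6, 2, 4, 6],   # rescaled copy of g1: fully redundant
    "g3": [1, 1, 1, 2, 2, 2],   # less relevant, but independent of g1
}
print(mrmr(features, target, k=2))   # redundancy pushes g2 below g3
```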

  2. An Application of the A* Search to Trajectory Optimization

    DTIC Science & Technology

    1990-05-11

    linearized model of orbital motion called the Clohessy-Wiltshire Equations and a node search technique called A*. The planner discussed in this thesis starts...states while transfer time is left unspecified. Chapter 2. Background: Hill's (Clohessy-Wiltshire) Equations. The Euler-Hill equations describe...Clohessy-Wiltshire equations. The coordinate system used in this thesis is commonly referred to as the Local Vertical, Local Horizontal (LVLH) reference frame

  3. Intra-Operative Dosimetry in Prostate Brachytherapy

    DTIC Science & Technology

    2007-11-01

    of the focal spot. 2.1. Model for Reconstruction Space Transformation. As illustrated in Figure 8, let A and B (with reference frames FA and FB) be the two...simplex optimization method in MATLAB 7.0 with the search space being defined by the distortion modes from PCA. A linear combination of the modes would...arm is tracked with an X-ray fiducial system called FTRAC that is composed of optimally selected polynomial

  4. 2D photonic crystal complete band gap search using a cyclic cellular automaton refination

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refination method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided with a heuristic evolutionary method called differential evolution (DE) used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information of suboptimal configurations and we made a statistical study of how it is affected by disorder in the borders of the structure compared with a previous work that uses a genetic algorithm.
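
    The combinatorial search described above can be sketched with a common binary-DE encoding: evolve continuous genomes with standard DE/rand/1/bin and decode them to bits by thresholding at 0.5 for fitness evaluation. This encoding and all parameter values are illustrative assumptions, not necessarily the authors' scheme, and the periodicity-exploiting convergence feature they describe is omitted.

    ```python
    import random

    def binary_de(fitness, n_bits, pop_size=30, F=0.7, CR=0.9, gens=150, seed=3):
        """DE over continuous genomes decoded to bits by thresholding at 0.5 --
        one common way to apply DE to a binary array (not necessarily the
        paper's exact encoding). `fitness` is maximised over bit tuples."""
        rng = random.Random(seed)
        decode = lambda g: tuple(1 if x > 0.5 else 0 for x in g)
        pop = [[rng.random() for _ in range(n_bits)] for _ in range(pop_size)]
        fit = [fitness(decode(g)) for g in pop]
        for _ in range(gens):
            for i in range(pop_size):
                a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
                jrand = rng.randrange(n_bits)
                trial = [
                    pop[a][j] + F * (pop[b][j] - pop[c][j])
                    if (rng.random() < CR or j == jrand) else pop[i][j]
                    for j in range(n_bits)
                ]
                ft = fitness(decode(trial))
                if ft >= fit[i]:          # greedy one-to-one replacement
                    pop[i], fit[i] = trial, ft
        best = max(range(pop_size), key=fit.__getitem__)
        return decode(pop[best]), fit[best]
    ```

    In the paper's application the bit array would encode presence or absence of dielectric material in each cell, and the fitness would be the band-gap figure of merit from the frequency-domain solver.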

  5. Evolutionary pattern search algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
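
    The step-size self-adaptation idea can be illustrated with a minimal (1+1)-style search that expands the mutation step after a successful move and contracts it after a failure. This is only a sketch of the mechanism, not Hart's exact EPSA; the expansion/contraction factors and the single-parent scheme are illustrative assumptions.

    ```python
    import random

    def adaptive_step_search(f, x0, sigma=1.0, iters=2000, seed=1):
        """(1+1)-style minimisation that widens the mutation step after a
        success and contracts it after a failure -- the self-adaptation
        idea behind EPSAs (not Hart's exact algorithm)."""
        rng = random.Random(seed)
        x, fx = list(x0), f(x0)
        for _ in range(iters):
            y = [xi + sigma * rng.gauss(0, 1) for xi in x]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
                sigma *= 1.5       # success: take bolder steps
            else:
                sigma *= 0.9       # failure: contract toward a stationary point
        return x, fx, sigma

    sphere = lambda v: sum(t * t for t in v)
    ```

    The contracting step size is also what makes a principled stopping rule possible: once sigma falls below a threshold, the search is provably close to a stationary point under the convergence theory the paper develops.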

  6. Honey Bees Inspired Optimization Method: The Bees Algorithm.

    PubMed

    Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo

    2013-11-06

    Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, satisfying one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex, and it emerges from the behaviors of the individuals of the swarm. Researchers have developed computational optimization methods based on biology, such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm combines an exploitative neighborhood search with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers an advantage over other optimization methods, depending on the nature of the problem.
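
    A minimal sketch of the basic Bees Algorithm described above: scout bees sample the space at random, the best m sites are exploited by forager bees searching shrinking neighborhoods (with e elite sites receiving more foragers), and the remaining scouts keep exploring. Parameter names follow common BA notation; all values, the 2D search space, and the neighborhood-shrinking schedule are illustrative assumptions.

    ```python
    import random

    def bees_algorithm(f, bounds, n=20, m=5, e=2, nep=7, nsp=3, ngh=0.5,
                       iters=100, seed=0):
        """Basic Bees Algorithm sketch (minimisation in 2D): n scouts,
        m selected sites of which e are elite, nep foragers per elite site,
        nsp per remaining selected site, neighbourhood radius ngh."""
        rng = random.Random(seed)
        lo, hi = bounds
        rand_pt = lambda: [rng.uniform(lo, hi) for _ in range(2)]
        pop = sorted((rand_pt() for _ in range(n)), key=f)
        for _ in range(iters):
            new_pop = []
            for i, site in enumerate(pop[:m]):
                recruits = nep if i < e else nsp       # more bees for elite sites
                cands = [site] + [
                    [min(hi, max(lo, c + rng.uniform(-ngh, ngh))) for c in site]
                    for _ in range(recruits)
                ]
                new_pop.append(min(cands, key=f))      # exploitative local search
            new_pop += [rand_pt() for _ in range(n - m)]  # explorative scouts
            pop = sorted(new_pop, key=f)
            ngh *= 0.95                                # shrink neighbourhoods
        return pop[0], f(pop[0])

    sphere = lambda p: sum(x * x for x in p)
    ```

    The `nep`/`nsp` split is the exploitative side of the algorithm, while the re-randomised scouts provide the random explorative search the abstract mentions.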

  7. When Gravity Fails: Local Search Topology

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Cheeseman, Peter; Stutz, John; Lau, Sonie (Technical Monitor)

    1997-01-01

    Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e., global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.

  8. Spatio-Temporal Dynamics of Field Cricket Calling Behaviour: Implications for Female Mate Search and Mate Choice.

    PubMed

    Nandi, Diptarup; Balakrishnan, Rohini

    2016-01-01

    Amount of calling activity (calling effort) is a strong determinant of male mating success in species such as orthopterans and anurans that use acoustic communication in the context of mating behaviour. While many studies in crickets have investigated the determinants of calling effort, patterns of variability in male calling effort in natural choruses remain largely unexplored. Within-individual variability in calling activity across multiple nights of calling can influence female mate search and mate choice strategies. Moreover, calling site fidelity across multiple nights of calling can also affect the female mate sampling strategy. We therefore investigated the spatio-temporal dynamics of acoustic signaling behaviour in a wild population of the field cricket species Plebeiogryllus guttiventris. We first studied the consistency of calling activity by quantifying variation in male calling effort across multiple nights of calling using repeatability analysis. Callers were inconsistent in their calling effort across nights and did not optimize nightly calling effort to increase their total number of nights spent calling. We also estimated calling site fidelity of males across multiple nights by quantifying movement of callers. Callers frequently changed their calling sites across calling nights with substantial displacement but without any significant directionality. Finally, we investigated trade-offs between within-night calling effort and energetically expensive calling song features such as call intensity and chirp rate. Calling effort was not correlated with any of the calling song features, suggesting that energetically expensive song features do not constrain male calling effort. The two key features of signaling behaviour, calling effort and call intensity, which determine the duration and spatial coverage of the sexual signal, are therefore uncorrelated and function independently.

  9. A Corporate Library's "Single Search Box" Solution

    ERIC Educational Resources Information Center

    Waldstein, Robert

    2013-01-01

    Alcatel-Lucent has had an internal library website called InfoView since 1993. They always had pages for the various diverse resources they maintained for Alcatel-Lucent employees, such as books, serials, artifacts, market reports, and discounts. Each page had two search boxes: one for a "site" search on all pages and one searching the…

  10. Clustering methods for the optimization of atomic cluster structure

    NASA Astrophysics Data System (ADS)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
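
    The core idea, launching the expensive deep local search only once per cluster of cheap sample points, can be sketched as follows. This sketch uses naive leader clustering in the raw coordinates rather than the paper's learned feature space, and a simple coordinate-descent routine stands in for their efficient descent method; the 2D test domain, the radius, and both routines are illustrative assumptions.

    ```python
    import random

    def coord_descent(f, x, step=0.5, tol=1e-6):
        """Toy 'deep local search': axis-wise descent with step halving."""
        x = list(x)
        fx = f(x)
        while step > tol:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    y = list(x); y[i] += d
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
            if not improved:
                step *= 0.5
        return tuple(x), fx

    def clustered_multistart(f, local_search, n_samples=60, radius=1.5, seed=7):
        """Multistart where cheap samples are grouped by proximity and the
        expensive local search is launched once per group -- the 'save
        local searches via clustering' idea."""
        rng = random.Random(seed)
        samples = [(rng.uniform(-6, 6), rng.uniform(-6, 6)) for _ in range(n_samples)]
        leaders = []
        for p in sorted(samples, key=f):            # best-first leader clustering
            if all(sum((a - b) ** 2 for a, b in zip(p, q)) > radius ** 2
                   for q in leaders):
                leaders.append(p)
        results = [local_search(f, p) for p in leaders]   # one deep search each
        return min(results, key=lambda r: r[1]), len(leaders)
    ```

    The saving is visible in the return value: only `len(leaders)` local searches run instead of one per sample.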

  11. Optimization of Online Searching by Pre-Recording the Search Statements: A Technique for the HP-2645A Terminal.

    ERIC Educational Resources Information Center

    Oberhauser, O. C.; Stebegg, K.

    1982-01-01

    Describes the terminal's capabilities, ways to store and call up lines of statements, cassette tapes needed during searches, and master tape's use for login storage. Advantages of the technique and two sources are listed. (RBF)

  12. Mining relational paths in integrated biomedical data.

    PubMed

    He, Bing; Tang, Jie; Ding, Ying; Wang, Huijun; Sun, Yuyin; Shin, Jae Hong; Chen, Bin; Moorthy, Ganesh; Qiu, Judy; Desai, Pankaj; Wild, David J

    2011-01-01

    Much life science and biology research requires an understanding of complex relationships between biological entities (genes, compounds, pathways, diseases, and so on). There is a wealth of data on such relationships in publicly available datasets and publications, but these sources overlap and are distributed, so that finding pertinent relational data is increasingly difficult. Whilst most public datasets have associated tools for searching, there is a lack of search methods that can cross data sources and, in particular, search not only on the biological entities themselves but also on the relationships between them. In this paper, we demonstrate how graph-theoretic algorithms for mining relational paths can be used together with a previous integrative data resource we developed called Chem2Bio2RDF to extract new biological insights about the relationships between such entities. In particular, we use these methods to investigate the genetic basis of side-effects of thiazolidinedione drugs, making a hypothesis for the recently discovered cardiac side-effects of Rosiglitazone (Avandia) and a prediction for Pioglitazone, which is backed up by recent clinical studies.

  13. A synergetic combination of small and large neighborhood schemes in developing an effective procedure for solving the job shop scheduling problem.

    PubMed

    Amirghasemi, Mehrdad; Zamani, Reza

    2014-01-01

    This paper presents an effective procedure for solving the job shop problem. Synergistically combining small and large neighborhood schemes, the procedure consists of four components, namely (i) a construction method for generating semi-active schedules by a forward-backward mechanism, (ii) a local search for manipulating a small neighborhood structure guided by a tabu list, (iii) a feedback-based mechanism for perturbing the solutions generated, and (iv) a very large-neighborhood local search guided by a forward-backward shifting bottleneck method. The combination of the shifting bottleneck mechanism and the tabu list is used as a means of manipulating neighborhood structures, and the perturbation mechanism employed diversifies the search. A feedback mechanism, called repeat-check, detects consecutive repeats and ignites a perturbation when the total number of consecutive repeats for two identical makespan values reaches a given threshold. The results of extensive computational experiments on the benchmark instances indicate that the combination of these four components is synergetic, in the sense that they collectively make the procedure fast and robust.
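
    The repeat-check feedback rule can be sketched as a small stateful detector: it signals a perturbation once the search has alternated between two identical makespan values for a given number of consecutive repeats. The threshold value and the exact alternation test below are illustrative assumptions; the abstract specifies the rule only at the level just described.

    ```python
    def make_repeat_check(threshold=3):
        """Sketch of the 'repeat-check' rule: fire a perturbation once the
        last 2*threshold makespans keep alternating between two identical
        values (threshold value illustrative)."""
        history = []
        def check(makespan):
            history.append(makespan)
            if len(history) < 2 * threshold:
                return False
            tail = history[-2 * threshold:]
            a, b = tail[-2], tail[-1]
            # does the tail alternate a, b, a, b, ...?
            cycling = all(tail[i] == (a if i % 2 == 0 else b)
                          for i in range(len(tail)))
            if cycling:
                history.clear()      # reset after igniting a perturbation
                return True
            return False
        return check
    ```

    The enclosing search loop would call `check(makespan)` after every move and apply its perturbation operator whenever the detector fires.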

  14. Development and tuning of an original search engine for patent libraries in medicinal chemistry

    PubMed Central

    2014-01-01

    Background: The large increase in the size of patent collections has led to the need for efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. Methods: We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. Results: The optimal tuning of our engine resulted in a top precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma. The application is briefly described in the report. Conclusions: We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks clearly increases the effectiveness of the system. We conclude that different search tasks demand different retrieval engine settings in order to yield optimal end-user retrieval. PMID:24564220

  15. Alpha-beta coordination method for collective search

    DOEpatents

    Goldsmith, Steven Y.

    2002-01-01

    The present invention comprises a decentralized coordination strategy called alpha-beta coordination. The alpha-beta coordination strategy is a family of collective search methods that allow teams of communicating agents to implicitly coordinate their search activities through a division of labor based on self-selected roles and self-determined status. An agent can play one of two complementary roles. An agent in the alpha role is motivated to improve its status by exploring new regions of the search space. An agent in the beta role is also motivated to improve its status, but is conservative and tends to remain aggregated with other agents until alpha agents have clearly identified and communicated better regions of the search space. An agent can select its role dynamically based on its current status value relative to the status values of neighboring team members. Status can be determined by a function of the agent's sensor readings, and can generally be a measurement of source intensity at the agent's current location. An agent's decision cycle can comprise three sequential decision rules: (1) selection of a current role based on the evaluation of the current status data, (2) selection of a specific subset of the current data, and (3) determination of the next heading using the selected data. Variations of the decision rules produce different versions of alpha and beta behaviors that lead to different collective behavior properties.
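
    The role-selection step of the alpha-beta strategy can be sketched as a single rule comparing an agent's status with the statuses of its neighbors. The median criterion below is an illustrative assumption; the patent describes status-relative role selection without committing to one particular statistic.

    ```python
    def select_role(my_status, neighbor_statuses):
        """Sketch of the alpha-beta role rule: an agent whose status is
        below the median of its neighbors explores new regions (alpha);
        otherwise it stays aggregated with the team (beta). The median
        criterion is illustrative, not the patent's fixed rule."""
        if not neighbor_statuses:
            return "alpha"          # nothing to aggregate with: explore
        ranked = sorted(neighbor_statuses)
        median = ranked[len(ranked) // 2]
        return "alpha" if my_status < median else "beta"
    ```

    In a full agent, this rule would be step (1) of the decision cycle described above, followed by data selection and heading determination.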

  16. Chemical and isotopic database of water and gas from hydrothermal systems with an emphasis for the western United States

    USGS Publications Warehouse

    Mariner, R.H.; Venezky, D.Y.; Hurwitz, S.

    2006-01-01

    Chemical and isotope data accumulated by two USGS projects (led by I. Barnes and R. Mariner) over a period of about 40 years can now be found using a basic web search or through an image search. The data are primarily chemical and isotopic analyses of waters (thermal, mineral, or fresh) and associated gas (free and/or dissolved) collected from hot springs, mineral springs, cold springs, geothermal wells, fumaroles, and gas seeps. Additional information is available about the collection methods and analysis procedures. The chemical and isotope data are stored in a MySQL database and accessed using PHP via a basic search form. Data can also be accessed using an open-source GIS called WorldKit. Additional information is available about WorldKit, including the files used to set up the site.

  17. Addressing Data Analysis Challenges in Gravitational Wave Searches Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Weerathunga, Thilina Shihan

    2017-08-01

    Gravitational waves are a fundamental prediction of Einstein's General Theory of Relativity. The first experimental proof of their existence was provided by Taylor and Hulse's Nobel Prize-winning discovery of orbital decay in a binary pulsar system. The first detection of gravitational waves incident on Earth from an astrophysical source was announced in 2016 by the LIGO Scientific Collaboration, launching the new era of gravitational wave (GW) astronomy. The signal detected was from the merger of two black holes, an example of the sources called Compact Binary Coalescences (CBCs). Data analysis strategies used in the search for CBC signals are derivatives of the Maximum-Likelihood (ML) method. The ML method applied to data from a network of geographically distributed GW detectors, called fully coherent network analysis, is currently the best approach for estimating source location and GW polarization waveforms. However, in the case of CBCs, especially for lower mass systems (of order 1 solar mass) such as double neutron star binaries, fully coherent network analysis is computationally expensive. The ML method requires locating the global maximum of the likelihood function over a nine-dimensional parameter space, where the computation of the likelihood at each point requires correlations involving O(10^4) to O(10^6) samples between the data and the corresponding candidate signal waveform template. Approximations, such as semi-coherent coincidence searches, are currently used to circumvent the computational barrier but incur a concomitant loss in sensitivity. We explored the effectiveness of Particle Swarm Optimization (PSO), a well-known algorithm in the field of swarm intelligence, in addressing the fully coherent network analysis problem. 
As an example, we used a four-detector network consisting of the two LIGO detectors at Hanford and Livingston, Virgo and KAGRA, all having initial LIGO noise power spectral densities, and show that PSO can locate the global maximum with fewer than 240,000 likelihood evaluations for a component mass range of 1.0 to 10.0 solar masses at a realistic coherent network signal-to-noise ratio of 9.0. Our results show that PSO can successfully deliver a fully coherent all-sky search with less than 1/10 the number of likelihood evaluations needed for a grid-based search. Used as a follow-up step, the savings in the number of likelihood evaluations may also reduce latency in obtaining ML estimates of source parameters in semi-coherent searches.
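
    A plain global-best PSO of the kind explored here is compact to write down. In the search described above, `f` would be the negative coherent-network log-likelihood over the nine-dimensional signal-parameter space; the sketch below treats `f` as an arbitrary black box to be minimised, and all swarm parameters are illustrative.

    ```python
    import random

    def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=5):
        """Plain global-best PSO (minimisation). Each particle tracks its
        personal best; the swarm shares a global best."""
        rng = random.Random(seed)
        lo, hi = bounds
        X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        pbest = [list(x) for x in X]
        pval = [f(x) for x in X]
        g = min(range(n_particles), key=pval.__getitem__)
        gbest, gval = list(pbest[g]), pval[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (pbest[i][d] - X[i][d])
                               + c2 * rng.random() * (gbest[d] - X[i][d]))
                    X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                fx = f(X[i])
                if fx < pval[i]:
                    pbest[i], pval[i] = list(X[i]), fx
                    if fx < gval:
                        gbest, gval = list(X[i]), fx
        return gbest, gval
    ```

    The appeal for the likelihood search is visible in the budget: the swarm above spends n_particles × iters = 6,000 evaluations regardless of how finely a grid would have to sample the same space.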

  18. GEsture: an online hand-drawing tool for gene expression pattern search.

    PubMed

    Wang, Chunyan; Xu, Yiqing; Wang, Xuelin; Zhang, Li; Wei, Suyun; Ye, Qiaolin; Zhu, Youxiang; Yin, Hengfu; Nainwal, Manoj; Tanon-Reyes, Luis; Cheng, Feng; Yin, Tongming; Ye, Ning

    2018-01-01

    Gene expression profiling data provide useful information for the investigation of biological function and processes. However, identifying a specific expression pattern from extensive time-series gene expression data is not an easy task. Clustering, a popular method, is often used to group genes with similar expression; however, genes with a 'desirable' or 'user-defined' pattern cannot be efficiently detected by clustering methods. To address these limitations, we developed an online tool called GEsture. Users can draw, or graph, a curve using a mouse instead of inputting abstract parameters of clustering methods. GEsture explores genes showing similar, opposite and time-delayed expression patterns in time-series datasets, taking a gene expression curve as input. We present three examples that illustrate the capacity of GEsture in gene hunting while following users' requirements. GEsture also provides visualization tools (such as expression pattern figures, heat maps and correlation networks) to display the search results. The outputs may provide useful information for researchers to understand the targets, functions and biological processes of the genes involved.

  19. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
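
    The LSH step can be sketched with the classic bit-sampling family for Hamming distance: each hash table keys a binary pattern by a random subset of its positions, so near-identical patterns collide with high probability and the exact Hamming distance is computed only for colliding candidates. This is a sketch of the idea, not the LSHSIM implementation (the RLE-accelerated similarity is omitted); the table count and key width are illustrative assumptions.

    ```python
    import random

    def build_lsh_index(patterns, n_tables=8, bits_per_key=6, seed=11):
        """Bit-sampling LSH over binary patterns: returns a query function
        that finds the nearest stored pattern (by Hamming distance) among
        the candidates that collide in at least one table."""
        rng = random.Random(seed)
        n = len(patterns[0])
        projections = [rng.sample(range(n), bits_per_key) for _ in range(n_tables)]
        tables = [{} for _ in range(n_tables)]
        for idx, p in enumerate(patterns):
            for proj, table in zip(projections, tables):
                table.setdefault(tuple(p[j] for j in proj), []).append(idx)

        def query(target):
            cands = set()
            for proj, table in zip(projections, tables):
                cands.update(table.get(tuple(target[j] for j in proj), []))
            # exact Hamming distance only on the colliding candidates
            return min(cands,
                       key=lambda i: sum(a != b for a, b in zip(patterns[i], target)),
                       default=None)
        return query
    ```

    The speedup comes from the `cands` set: a full linear scan over all stored patterns is replaced by an exact comparison against only the hash collisions.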

  20. Finding Specification Pages from the Web

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., "Ch. d'Yquem") and its class label (e.g., "wine"). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., "county"-"Sauternes") in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., "county" for a class "wine"). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval.

  1. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.

  2. Research on Agriculture Domain Meta-Search Engine System

    NASA Astrophysics Data System (ADS)

    Xie, Nengfu; Wang, Wensheng

    The rapid growth of agricultural information on the web means that general-purpose search engines often cannot return satisfactory results for users' queries. In this paper, we propose an agriculture domain meta-search engine system, called ADSE, that obtains results through an advanced interface to several search engines and aggregates them. We also discuss two key technologies: agriculture information determination and the engine itself.

  3. Next-Gen Search Engines

    ERIC Educational Resources Information Center

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  4. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins.

    PubMed

    Harper, Angela F; Leuthaeuser, Janelle B; Babbitt, Patricia C; Morris, John H; Ferrin, Thomas E; Poole, Leslie B; Fetrow, Jacquelyn S

    2017-02-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally relevant clusters. Superfamily members need not be identified initially; MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method's novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. 
The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences.

  5. An evolutionary algorithm that constructs recurrent neural networks.

    PubMed

    Angeline, P J; Saunders, G M; Pollack, J B

    1994-01-01

    Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.

  6. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

An efficient biprediction decision scheme of high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine if biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Through experimental results, the proposed method showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in view of encoding time, number of function calls, and memory access.

  7. An exponentiation method for XML element retrieval.

    PubMed

    Wichaiwong, Tanakorn

    2014-01-01

XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the BM25 baseline at MAP.
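The abstract does not spell out the Exponentiation function itself; the following is a minimal sketch of how a structural weight could be layered on a BM25 baseline. The `alpha ** depth` form and every parameter value here are assumptions for illustration, not the MEXIR implementation.

```python
import math

def bm25_score(tf, df, n_docs, doc_len, avg_len, k1=1.2, b=0.75):
    # standard BM25 term score for one query term in one element
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))

def structural_score(base_score, depth, alpha=1.5):
    # hypothetical exponentiation weighting: boost matches that occur in
    # deeper (more specific) XML elements; alpha and the exponent form
    # are assumptions, not the paper's function
    return base_score * alpha ** depth
```

The point of such a scheme is that a match deep inside a structurally constrained element outranks the same term match at the document root.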

  8. A Hybrid P2P Overlay Network for Non-strictly Hierarchically Categorized Content

    NASA Astrophysics Data System (ADS)

    Wan, Yi; Asaka, Takuya; Takahashi, Tatsuro

In P2P content distribution systems, there are many cases in which the content can be classified into hierarchically organized categories. In this paper, we propose a hybrid overlay network design suitable for such content called Pastry/NSHCC (Pastry for Non-Strictly Hierarchically Categorized Content). The semantic information of the classification hierarchies of the content can be utilized regardless of whether they are in a strict tree structure or not. By doing so, the search scope can be restricted to any granularity, and the number of query messages also decreases while maintaining keyword searching availability. Through simulation, we showed that the proposed method provides better performance and lower overhead than unstructured overlays exploiting the same semantic information.

  9. Cancer Internet Search Activity on a Major Search Engine, United States 2001-2003

    PubMed Central

    Cooper, Crystale Purvis; Mallon, Kenneth P; Leadbetter, Steven; Peipins, Lucy A

    2005-01-01

    Background To locate online health information, Internet users typically use a search engine, such as Yahoo! or Google. We studied Yahoo! search activity related to the 23 most common cancers in the United States. Objective The objective was to test three potential correlates of Yahoo! cancer search activity—estimated cancer incidence, estimated cancer mortality, and the volume of cancer news coverage—and to study the periodicity of and peaks in Yahoo! cancer search activity. Methods Yahoo! cancer search activity was obtained from a proprietary database called the Yahoo! Buzz Index. The American Cancer Society's estimates of cancer incidence and mortality were used. News reports associated with specific cancer types were identified using the LexisNexis “US News” database, which includes more than 400 national and regional newspapers and a variety of newswire services. Results The Yahoo! search activity associated with specific cancers correlated with their estimated incidence (Spearman rank correlation, ρ = 0.50, P = .015), estimated mortality (ρ = 0.66, P = .001), and volume of related news coverage (ρ = 0.88, P < .001). Yahoo! cancer search activity tended to be higher on weekdays and during national cancer awareness months but lower during summer months; cancer news coverage also tended to follow these trends. Sharp increases in Yahoo! search activity scores from one day to the next appeared to be associated with increases in relevant news coverage. Conclusions Media coverage appears to play a powerful role in prompting online searches for cancer information. Internet search activity offers an innovative tool for passive surveillance of health information–seeking behavior. PMID:15998627
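The Spearman rank correlations reported above can be reproduced with a small routine: rank both series (averaging ranks over ties) and take the Pearson correlation of the ranks. This is a generic sketch on made-up numbers, not the study's data.

```python
def rank(values):
    # assign ranks 1..n, averaging ranks over tied values
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Pearson correlation computed on the ranks of x and y
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

A perfectly monotone relationship gives rho = 1 (or -1 when decreasing), which is why rank correlation suits skewed quantities like search volume and news counts.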

  10. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. 
Solution error plots are provided for both the simulations and actual data analyses.
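Differential Evolution itself is a generic population-based global minimizer; a minimal rand/1/bin sketch is shown below. The test function, bounds, and control parameters are illustrative only, not the GoFFRA configuration or its constrained mixed-exponential objective.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           iters=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # rand/1 mutation from three distinct other members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # force at least one crossed dimension
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp into the box
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Because every population member is constrained inside the search box, this style of optimizer can keep parameters from wandering into the role-swapping regions that cause the LS and PIT ambiguities described above.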

  11. Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles

    ERIC Educational Resources Information Center

    Carlson, Scott

    2006-01-01

    Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…

  12. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

The general technique uses a modified version of an existing method termed the pattern search technique. A new procedure called the parallel move strategy permits the pattern search technique to be used with problems involving a constraint.
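A compass-style pattern search with a penalty term for the inequality constraint can be sketched as follows; the quadratic-penalty formulation is an illustration, since the abstract does not detail the parallel move strategy itself.

```python
def pattern_search(f, g, x0, step=1.0, tol=1e-6, penalty=1e6):
    # minimize f(x) subject to g(x) <= 0 via a quadratic penalty
    def p(x):
        v = g(x)
        return f(x) + (penalty * v * v if v > 0 else 0.0)
    x = list(x0)
    fx = p(x)
    while step > tol:
        improved = False
        for i in range(len(x)):          # probe +/- step along each axis
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = p(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                  # shrink the pattern and retry
    return x
```

On a problem whose unconstrained minimum violates the constraint, the search settles on the constraint boundary, e.g. minimizing (x-2)^2 subject to x <= 1 returns x near 1.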

  13. Using Genetic Programming with Prior Formula Knowledge to Solve Symbolic Regression Problem.

    PubMed

    Lu, Qiang; Ren, Jun; Wang, Zhiguang

    2016-01-01

A researcher can quickly infer mathematical expressions of functions by using professional knowledge (called Prior Knowledge), but the results found in this way may be biased and restricted to the researcher's field due to the limitations of that knowledge. In contrast, the Genetic Programming (GP) method can discover fitted mathematical expressions from a huge search space by running evolutionary algorithms, and its results can be generalized to accommodate different fields of knowledge. However, since GP has to search a huge space, it finds results rather slowly. Therefore, in this paper, a framework connecting Prior Formula Knowledge and GP (PFK-GP) is proposed to reduce the space GP must search. The PFK is built on a Deep Belief Network (DBN), which can identify candidate formulas that are consistent with the features of the experimental data. By using these candidate formulas as the seed of a randomly generated population, PFK-GP finds the right formulas quickly by exploring the search space of data features. We have compared PFK-GP with Pareto GP on regression of eight benchmark problems. The experimental results confirm that PFK-GP can reduce the search space and obtain a significant improvement in the quality of symbolic regression (SR).

  14. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    PubMed

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest in improving the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases and the increasing demand for batch query searching are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41% and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
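The cluster-based remodeling reduces search cost by scoring only cluster representatives first, then scanning members of the best clusters. A schematic two-stage lookup is sketched below; the generic `query_sim` scoring function stands in for the actual profile-HMM alignment, so this shows only the search-space reduction idea.

```python
def two_stage_search(query_sim, clusters, top_k=1):
    """clusters: {representative_id: [member_ids]};
    query_sim(id) -> similarity of the query to that profile.
    Stage 1 scores only representatives; stage 2 scans members of the
    top_k best clusters, so most of the database is never aligned."""
    reps = sorted(clusters, key=query_sim, reverse=True)[:top_k]
    hits = []
    for rep in reps:
        for member in clusters[rep]:
            hits.append((member, query_sim(member)))
    return max(hits, key=lambda h: h[1])
```

Allowing a profile to belong to several clusters (the overlapping step in the abstract) would simply mean member lists share entries, trading extra scans for recall.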

  15. Effectiveness of automated notification and customer service call centers for timely and accurate reporting of critical values: a laboratory medicine best practices systematic review and meta-analysis.

    PubMed

    Liebow, Edward B; Derzon, James H; Fontanesi, John; Favoretto, Alessandra M; Baetz, Rich Ann; Shaw, Colleen; Thompson, Pamela; Mass, Diana; Christenson, Robert; Epner, Paul; Snyder, Susan R

    2012-09-01

    To conduct a systematic review of the evidence available in support of automated notification methods and call centers and to acknowledge other considerations in making evidence-based recommendations for best practices in improving the timeliness and accuracy of critical value reporting. This review followed the Laboratory Medicine Best Practices (LMBP) review methods (Christenson, et al. 2011). A broad literature search and call for unpublished submissions returned 196 bibliographic records which were screened for eligibility. 41 studies were retrieved. Of these, 4 contained credible evidence for the timeliness and accuracy of automatic notification systems and 5 provided credible evidence for call centers for communicating critical value information in in-patient care settings. Studies reporting improvement from implementing automated notification findings report mean differences and were standardized using the standard difference in means (d=0.42; 95% CI=0.2-0.62) while studies reporting improvement from implementing call centers generally reported criterion referenced findings and were standardized using odds ratios (OR=22.1; 95% CI=17.1-28.6). The evidence, although suggestive, is not sufficient to make an LMBP recommendation for or against using automated notification systems as a best practice to improve the timeliness of critical value reporting in an in-patient care setting. Call centers, however, are effective in improving the timeliness of critical value reporting in an in-patient care setting, and meet LMBP criteria to be recommended as an "evidence-based best practice." Copyright © 2012 The Canadian Society of Clinical Chemists. All rights reserved.
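The standardized difference in means used to pool the notification studies is the usual effect-size statistic with a pooled standard deviation (Cohen's d). A minimal sketch, on illustrative numbers rather than the reviewed studies' data:

```python
def standardized_mean_difference(m1, s1, n1, m2, s2, n2):
    # Cohen's d: difference in group means divided by the pooled SD
    sp = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / sp
```

Standardizing like this is what lets studies that report timeliness on different scales be combined into a single estimate such as the d = 0.42 above.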

  16. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins

    PubMed Central

    Babbitt, Patricia C.; Ferrin, Thomas E.

    2017-01-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially—MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method’s novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. 
The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences. PMID:28187133
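The agglomerative/divisive loop around the self-identification criterion can be sketched abstractly. Here `search` and `split` are mock oracles standing in for GenBank searching and active-site-profile clustering, so this illustrates only the control flow, not MISST itself.

```python
def closure(seed, search):
    """Agglomerative step: keep adding everything the group retrieves
    until the search returns exactly the group (self-identification)."""
    g = set(seed)
    while True:
        hits = search(g)
        if hits == g:
            return g
        g |= hits

def misst_sketch(seed, search, split):
    """Divisive step: a closed group is split when `split` (a stand-in
    for functional-site clustering) proposes subgroups; each part is
    re-closed, recursing until every group passes self-identification."""
    g = closure(seed, search)
    parts = split(g)
    if len(parts) <= 1:
        return [sorted(g)]
    out = []
    for p in parts:
        out.extend(misst_sketch(p, search, split))
    return out
```

With a toy two-family universe, a mixed seed first grows to cover both families and is then divided into two groups that each retrieve only themselves, mirroring the Prx1/Prx6 separation described above.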

  17. [Emergencies and continuous care: overload of the current on-call system and search for new models].

    PubMed

    Enríquez-Navascués, Jose M

    2008-04-01

Emergency surgical care is still provided by means of a 24-hour physical-presence "on-call" model (encompassing a normal day followed by being "on call"), and is obligatory for all staff. This defective organisation of work has become unsustainable with the acceptance of the European 48-hour Directive, and is gruelling due to the excessive night work and the feeling of being locked in that it entails. Emergency general and digestive system surgery care cannot be provided by a single organisational model, but has to be adapted to local circumstances. It is important to separate scheduled activity from urgent activity; whereas increasingly more resources are dedicated to scheduled care, sufficient resources are also required for urgent activities, which cannot be considered as simply an "on call" or a fleeting stop in scheduled activity. Core subjects in residency, creating different levels of provision and activities, the analysis of urgent activity per work period and the identification of foreseeable activity, to maintain a pro-active mentality, and the disappearance of the "overtime" concept, should help provide another care model and method of remuneration.

  18. Calling, texting, and searching for information while riding a motorcycle: A study of university students in Vietnam.

    PubMed

    Truong, Long T; De Gruyter, Chris; Nguyen, Hang T T

    2017-08-18

    The objective of this study was to investigate the prevalence of calling, texting, and searching for information while riding a motorcycle among university students and the influences of sociodemographic characteristics, social norms, and risk perceptions on these behaviors. Students at 2 university campuses in Hanoi and Ho Chi Minh City, the 2 largest cities in Vietnam, were invited to participate in an anonymous online survey. Data collection was conducted during March and May 2016. There were 741 respondents, of whom nearly 90% of students (665) were motorcycle riders. Overall prevalence of mobile phone use while riding is 80.9% (95% confidence interval [CI], 77.9-83.9%) with calling having a higher level of prevalence than texting or searching for information while riding: 74% (95% CI, 70.7-77.3%) vs. 51.7% (95% CI, 47.9-55.5%) and 49.9% (95% CI, 46.1-53.7%), respectively. Random parameter ordered probit modeling results indicate that mobile phone use while riding is associated with gender, motorcycle license duration, perceived crash risk, perceived risk of mobile phone snatching, and perceptions of friends' mobile phone use while riding. Mobile phone use while riding a motorcycle is highly prevalent among university students. Educational programs should focus on the crash and economic risk of all types of mobile phone use while riding, including calling, texting, and searching for information. In addition, they should consider targeting the influence of social norms and peers on mobile phone use while riding.

  19. Informatics in radiology: RADTF: a semantic search-enabled, natural language processor-generated radiology teaching file.

    PubMed

    Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L

    2010-11-01

    Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010

  20. The Salience of a Career Calling among College Students: Exploring Group Differences and Links to Religiousness, Life Meaning, and Life Satisfaction

    ERIC Educational Resources Information Center

    Duffy, Ryan D.; Sedlacek, William E.

    2010-01-01

    The authors examined the degree to which 1st-year college students endorse a career calling and how levels of calling differ across demographic variables and religiousness, life meaning, and life satisfaction. Forty-four percent of students believed that having a career calling was mostly or totally true of them, and 28% responded to searching for…

  2. Adaptive search in mobile peer-to-peer databases

    NASA Technical Reports Server (NTRS)

    Wolfson, Ouri (Inventor); Xu, Bo (Inventor)

    2010-01-01

    Information is stored in a plurality of mobile peers. The peers communicate in a peer to peer fashion, using a short-range wireless network. Occasionally, a peer initiates a search for information in the peer to peer network by issuing a query. Queries and pieces of information, called reports, are transmitted among peers that are within a transmission range. For each search additional peers are utilized, wherein these additional peers search and relay information on behalf of the originator of the search.

  3. A meta-heuristic method for solving scheduling problem: crow search algorithm

    NASA Astrophysics Data System (ADS)

    Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi

    2018-04-01

Scheduling is one of the most important processes in industry, in both manufacturing and services. Scheduling is the process of selecting resources to perform operations on tasks; resources can be machines, people, tasks, jobs, or operations. The selection of an optimum sequence of jobs from a permutation is an essential issue in scheduling research, since the optimum sequence constitutes the optimum solution of the scheduling problem. Scheduling becomes an NP-hard problem when the number of jobs in the sequence grows beyond what an exact algorithm can process. In order to obtain optimum results, a method is needed that can solve complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published method called the Crow Search Algorithm (CSA) is adopted in this research to solve a scheduling problem. CSA is an evolutionary meta-heuristic method based on the behavior of flocks of crows. The results of CSA on the scheduling problem are compared with those of other algorithms. From the comparison, it is found that CSA has better performance than the other algorithms in terms of solution quality and computation time.
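The CSA update rule itself is simple: each crow follows another crow's remembered food cache unless that crow is "aware" and flies off randomly. A minimal continuous-optimization sketch follows; the sphere test function, flight length `fl`, awareness probability `ap`, and other parameters are illustrative, not the paper's scheduling encoding.

```python
import random

def crow_search(f, bounds, n=20, fl=2.0, ap=0.1, iters=300, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_pos():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pos = [rand_pos() for _ in range(n)]
    mem = [list(p) for p in pos]          # each crow's best hiding place
    mem_fit = [f(m) for m in mem]
    for _ in range(iters):
        for i in range(n):
            j = rng.randrange(n)
            if rng.random() >= ap:        # crow i follows crow j's cache
                new = [pos[i][k] + rng.random() * fl * (mem[j][k] - pos[i][k])
                       for k in range(dim)]
                new = [min(max(v, lo), hi)
                       for v, (lo, hi) in zip(new, bounds)]
            else:                          # crow j is aware: random move
                new = rand_pos()
            pos[i] = new
            fi = f(new)
            if fi < mem_fit[i]:            # update memory on improvement
                mem[i], mem_fit[i] = list(new), fi
    b = min(range(n), key=lambda i: mem_fit[i])
    return mem[b], mem_fit[b]
```

Applying CSA to scheduling, as in the paper, additionally requires mapping continuous positions to job permutations, which this sketch does not attempt.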

  4. LGscore: A method to identify disease-related genes using biological literature and Google data.

    PubMed

    Kim, Jeongwoo; Kim, Hyunjin; Yoon, Youngmi; Park, Sanghyun

    2015-04-01

    Since the genome project in 1990s, a number of studies associated with genes have been conducted and researchers have confirmed that genes are involved in disease. For this reason, the identification of the relationships between diseases and genes is important in biology. We propose a method called LGscore, which identifies disease-related genes using Google data and literature data. To implement this method, first, we construct a disease-related gene network using text-mining results. We then extract gene-gene interactions based on co-occurrences in abstract data obtained from PubMed, and calculate the weights of edges in the gene network by means of Z-scoring. The weights contain two values: the frequency and the Google search results. The frequency value is extracted from literature data, and the Google search result is obtained using Google. We assign a score to each gene through a network analysis. We assume that genes with a large number of links and numerous Google search results and frequency values are more likely to be involved in disease. For validation, we investigated the top 20 inferred genes for five different diseases using answer sets. The answer sets comprised six databases that contain information on disease-gene relationships. We identified a significant number of disease-related genes as well as candidate genes for Alzheimer's disease, diabetes, colon cancer, lung cancer, and prostate cancer. Our method was up to 40% more accurate than existing methods. Copyright © 2015 Elsevier Inc. All rights reserved.
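The network-scoring step, accumulating z-scored edge weights per gene, can be sketched generically. The toy weights below are hypothetical; in the abstract, each edge weight combines a PubMed co-occurrence frequency with a Google search result count.

```python
def zscore(values):
    # standardize a list of edge weights to zero mean, unit variance
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / n) ** 0.5
    return [(v - m) / sd if sd else 0.0 for v in values]

def gene_scores(edges):
    """edges: list of (gene_a, gene_b, weight) tuples.
    Each gene accumulates the z-scored weights of its links, so genes
    with many strong links rank highest."""
    ws = zscore([w for _, _, w in edges])
    scores = {}
    for (a, b, _), w in zip(edges, ws):
        scores[a] = scores.get(a, 0.0) + w
        scores[b] = scores.get(b, 0.0) + w
    return scores
```

Ranking genes by this score is what lets the top-20 lists above be read off directly from the weighted network.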

  5. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); and (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
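The split rule can be sketched directly from the description: an interval is split at a random cut whenever it holds more search points than the threshold. This recursive variant is for illustration only; the integration with an EDA's iteration loop (and the shrinking threshold) is omitted.

```python
import random

def split_on_demand(points, lo, hi, threshold, rng):
    """Recursively split [lo, hi) at a random cut while it holds more
    than `threshold` search points; returns sorted interval boundaries."""
    inside = [p for p in points if lo <= p < hi]
    if len(inside) <= threshold:
        return [lo, hi]
    cut = rng.uniform(lo, hi)
    left = split_on_demand(points, lo, cut, threshold, rng)
    right = split_on_demand(points, cut, hi, threshold, rng)
    return left[:-1] + right  # merge, dropping the duplicated cut point

def discretize(points, boundaries):
    # assign each point the integer code of the interval containing it
    codes = []
    for p in points:
        for i in range(len(boundaries) - 1):
            if boundaries[i] <= p < boundaries[i + 1]:
                codes.append(i)
                break
    return codes
```

Dense regions of the search space end up with many narrow codes while sparse regions keep coarse ones, which is the adaptivity the abstract describes.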

  6. Estimating the location of baleen whale calls using dual streamers to support mitigation procedures in seismic reflection surveys.

    PubMed

    Abadi, Shima H; Tolstoy, Maya; Wilcock, William S D

    2017-01-01

    In order to mitigate against possible impacts of seismic surveys on baleen whales it is important to know as much as possible about the presence of whales within the vicinity of seismic operations. This study expands on previous work that analyzes single seismic streamer data to locate nearby calling baleen whales with a grid search method that utilizes the propagation angles and relative arrival times of received signals along the streamer. Three dimensional seismic reflection surveys use multiple towed hydrophone arrays for imaging the structure beneath the seafloor, providing an opportunity to significantly improve the uncertainty associated with streamer-generated call locations. All seismic surveys utilizing airguns conduct visual marine mammal monitoring surveys concurrent with the experiment, with powering-down of seismic source if a marine mammal is observed within the exposure zone. This study utilizes data from power-down periods of a seismic experiment conducted with two 8-km long seismic hydrophone arrays by the R/V Marcus G. Langseth near Alaska in summer 2011. Simulated and experiment data demonstrate that a single streamer can be utilized to resolve left-right ambiguity because the streamer is rarely perfectly straight in a field setting, but dual streamers provides significantly improved locations. Both methods represent a dramatic improvement over the existing Passive Acoustic Monitoring (PAM) system for detecting low frequency baleen whale calls, with ~60 calls detected utilizing the seismic streamers, zero of which were detected using the current R/V Langseth PAM system. Furthermore, this method has the potential to be utilized not only for improving mitigation processes, but also for studying baleen whale behavior within the vicinity of seismic operations.

  7. Estimating the location of baleen whale calls using dual streamers to support mitigation procedures in seismic reflection surveys

    PubMed Central

    Abadi, Shima H.; Tolstoy, Maya; Wilcock, William S. D.

    2017-01-01

    In order to mitigate against possible impacts of seismic surveys on baleen whales it is important to know as much as possible about the presence of whales within the vicinity of seismic operations. This study expands on previous work that analyzes single seismic streamer data to locate nearby calling baleen whales with a grid search method that utilizes the propagation angles and relative arrival times of received signals along the streamer. Three dimensional seismic reflection surveys use multiple towed hydrophone arrays for imaging the structure beneath the seafloor, providing an opportunity to significantly improve the uncertainty associated with streamer-generated call locations. All seismic surveys utilizing airguns conduct visual marine mammal monitoring surveys concurrent with the experiment, with powering-down of seismic source if a marine mammal is observed within the exposure zone. This study utilizes data from power-down periods of a seismic experiment conducted with two 8-km long seismic hydrophone arrays by the R/V Marcus G. Langseth near Alaska in summer 2011. Simulated and experiment data demonstrate that a single streamer can be utilized to resolve left-right ambiguity because the streamer is rarely perfectly straight in a field setting, but dual streamers provides significantly improved locations. Both methods represent a dramatic improvement over the existing Passive Acoustic Monitoring (PAM) system for detecting low frequency baleen whale calls, with ~60 calls detected utilizing the seismic streamers, zero of which were detected using the current R/V Langseth PAM system. Furthermore, this method has the potential to be utilized not only for improving mitigation processes, but also for studying baleen whale behavior within the vicinity of seismic operations. PMID:28199400
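The grid search described in both versions of this abstract amounts to picking the candidate source position whose predicted relative arrival times best match the observed delays. A 2-D sketch with a constant sound speed is below; the geometry and values are synthetic, not the Langseth streamer data, and real use would also incorporate propagation angles.

```python
def locate_source(receivers, observed_delays, grid, c=1500.0):
    """Grid search: choose the grid point whose predicted arrival times
    relative to receiver 0 best fit the observed delays (least squares).
    c is an assumed constant sound speed in m/s."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best, best_err = None, float("inf")
    for g in grid:
        t = [dist(g, r) / c for r in receivers]
        pred = [ti - t[0] for ti in t]
        err = sum((p - o) ** 2 for p, o in zip(pred, observed_delays))
        if err < best_err:
            best, best_err = g, err
    return best
```

With a single straight streamer the misfit surface is left-right symmetric; a second streamer (or a slightly curved one, as noted above) breaks that symmetry.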

  8. Application of a fast skyline computation algorithm for serendipitous searching problems

    NASA Astrophysics Data System (ADS)

    Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary

    2018-02-01

    Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels, which accelerates tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
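
    As a concrete illustration of the skyline concept itself (not of the JR-tree structure), a minimal quadratic-time skyline filter, assuming every attribute is to be minimized, might look like:

```python
def skyline(points):
    """Return the skyline (Pareto-optimal) entries of a list of tuples,
    assuming every attribute is to be minimized. A point p dominates q if
    p <= q in all attributes and p < q in at least one."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. entries as (price, distance): dominated options are filtered out
entries = [(50, 8), (60, 5), (70, 2), (80, 3), (55, 9)]
print(skyline(entries))  # → [(50, 8), (60, 5), (70, 2)]
```

    Index structures like JR-tree exist precisely because this naive O(n^2) filter does not scale to large, dynamically changing populations.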

  9. Long-Term Priming of Visual Search Prevails against the Passage of Time and Counteracting Instructions

    ERIC Educational Resources Information Center

    Kruijne, Wouter; Meeter, Martijn

    2016-01-01

    Studies on "intertrial priming" have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change--so-called (short-term) intertrial priming. These effects also occur at longer time scales: When 1 of…

  10. SearchGUI: An open-source graphical user interface for simultaneous OMSSA and X!Tandem searches.

    PubMed

    Vaudel, Marc; Barsnes, Harald; Berven, Frode S; Sickmann, Albert; Martens, Lennart

    2011-03-01

    The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com), for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissive Apache2 license, SearchGUI is supported on Windows, Linux and OS X. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. CALL in the Year 2000: A Look Back from 2016

    ERIC Educational Resources Information Center

    Chapelle, Carol A.

    2016-01-01

    This commentary offers a brief reflection on the state of CALL in 1997, when "Language Learning & Technology" was launched with my paper entitled "CALL in the year 2000: Still in search of research paradigms?" The point of my 1997 paper was to suggest the potential value of research on second language learning for the study…

  12. The rid-redundant procedure in C-Prolog

    NASA Technical Reports Server (NTRS)

    Chen, Huo-Yan; Wah, Benjamin W.

    1987-01-01

    C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as with many search methods that use backward chaining, a large number of redundant computations may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the Vax 11/780 computer show an order-of-magnitude improvement in running time and solvable problem size.
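
    The effect of eliminating redundant recursive computation can be illustrated by analogy with memoization (tabling) in Python; this is not the rid-redundant procedure itself, only the general idea it exploits, shown on a classic multi-recursive function:

```python
from functools import lru_cache

calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    """Multi-recursive procedure: the same subgoals are re-derived repeatedly."""
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same procedure with results tabled, so each subgoal is solved once."""
    calls["memo"] += 1
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(20) == fib_memo(20) == 6765
print(calls["naive"], calls["memo"])  # → 21891 21
```

    The same answer is derived with 21 calls instead of 21,891, mirroring the order-of-magnitude (and better) savings that tabling multi-recursive Prolog procedures provides.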

  13. 78 FR 64930 - Open Forum on College Value and Affordability and College Ratings System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... about interpretation or translation services, please call 1-800-USA-LEARN (1- 800-872-5327) (TTY: 1-800... using the article search feature at: www.federalregister.gov . Specifically, through the advanced search...

  14. Bat detective-Deep learning tools for bat acoustic signal detection.

    PubMed

    Mac Aodha, Oisin; Gibb, Rory; Barlow, Kate E; Browning, Ella; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R; Newson, Stuart E; Pandourski, Ivan; Parsons, Stuart; Russ, Jon; Szodoray-Paradi, Abigel; Szodoray-Paradi, Farkas; Tilova, Elena; Girolami, Mark; Brostow, Gabriel; Jones, Kate E

    2018-03-01

    Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio, which is particularly problematic in noisy recordings. We developed a convolutional neural network-based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat-specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio.

  15. Bat detective—Deep learning tools for bat acoustic signal detection

    PubMed Central

    Barlow, Kate E.; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R.; Newson, Stuart E.; Pandourski, Ivan; Russ, Jon; Szodoray-Paradi, Abigel; Tilova, Elena; Girolami, Mark; Jones, Kate E.

    2018-01-01

    Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio, which is particularly problematic in noisy recordings. We developed a convolutional neural network-based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat-specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio. PMID:29518076

  16. Fast ancestral gene order reconstruction of genomes with unequal gene content.

    PubMed

    Feijão, Pedro; Araujo, Eloi

    2016-11-11

    During evolution, genomes are modified by large scale structural events, such as rearrangements, deletions or insertions of large blocks of DNA. Of particular interest, in order to better understand how this type of genomic evolution happens, is the reconstruction of ancestral genomes, given a phylogenetic tree with extant genomes at its leaves. One way of solving this problem is to assume a rearrangement model, such as Double Cut and Join (DCJ), and find a set of ancestral genomes that minimizes the number of events on the input tree. Since this problem is NP-hard for most rearrangement models, exact solutions are practical only for small instances, and heuristics have to be used for larger datasets. This type of approach can be called event-based. Another common approach is based on finding conserved structures between the input genomes, such as adjacencies between genes, possibly also assigning weights that indicate a measure of confidence or probability that this particular structure is present on each ancestral genome, and then finding a set of non-conflicting adjacencies that optimize some given function, usually trying to maximize total weight and minimize character changes in the tree. We call this type of method homology-based. In previous work, we proposed an ancestral reconstruction method that combines homology- and event-based ideas, using the concept of intermediate genomes, which arise in DCJ rearrangement scenarios. This method showed a better rate of correctly reconstructed adjacencies than other methods, while also being faster, since the use of intermediate genomes greatly reduces the search space. Here, we generalize the intermediate genome concept to genomes with unequal gene content, extending our method to account for gene insertions and deletions of any length.
In many of the simulated datasets, our proposed method had better results than MLGO and MGRA, two state-of-the-art algorithms for ancestral reconstruction with unequal gene content, while running much faster, making it more scalable to larger datasets. Studying ancestral reconstruction problems in a new light, using the concept of intermediate genomes, allows the design of very fast algorithms by greatly reducing the solution search space, while also giving very good results. The algorithms introduced in this paper were implemented in an open-source software package called RINGO (ancestral Reconstruction with INtermediate GenOmes), available at https://github.com/pedrofeijao/RINGO.
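
    The homology-based step of choosing a maximum-weight set of non-conflicting adjacencies can be sketched as greedy matching: each adjacency joins two gene extremities, and two adjacencies conflict if they share an extremity. The extremity names and weights below are hypothetical, and the greedy rule is the classic 1/2-approximation for maximum-weight matching, not RINGO's exact procedure:

```python
def select_adjacencies(weighted_adjacencies):
    """Greedily pick non-conflicting adjacencies in decreasing weight order.
    Each adjacency (w, a, b) joins extremities a and b; an extremity can be
    used at most once, so conflicting adjacencies are skipped."""
    chosen, used = [], set()
    for w, a, b in sorted(weighted_adjacencies, reverse=True):
        if a not in used and b not in used:
            chosen.append((a, b))
            used.update((a, b))
    return chosen

# Hypothetical extremities: "1h" = head of gene 1, "1t" = its tail, etc.
adjs = [(0.9, "1h", "2t"), (0.8, "2h", "3t"), (0.7, "1h", "3t"), (0.4, "3h", "1t")]
print(select_adjacencies(adjs))  # → [('1h', '2t'), ('2h', '3t'), ('3h', '1t')]
```

    The 0.7 adjacency is skipped because both of its extremities are already used, leaving a consistent (circular) gene order 1-2-3.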

  17. An Exponentiation Method for XML Element Retrieval

    PubMed Central

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning weights are significant for the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% over the BM25 baseline in terms of MAP. PMID:24696643

  18. Searching for rigour in the reporting of mixed methods population health research: a methodological review.

    PubMed

    Brown, K M; Elliott, S J; Leatherdale, S T; Robertson-Wilson, J

    2015-12-01

    The environments in which population health interventions occur shape both their implementation and outcomes. Hence, when evaluating these interventions, we must explore both intervention content and context. Mixed methods (integrating quantitative and qualitative methods) provide this opportunity. However, although criteria exist for establishing rigour in quantitative and qualitative research, there is poor consensus regarding rigour in mixed methods. Using the empirical example of school-based obesity interventions, this methodological review examined how mixed methods have been used and reported, and how rigour has been addressed. Twenty-three peer-reviewed mixed methods studies were identified through a systematic search of five databases and appraised using the guidelines for Good Reporting of a Mixed Methods Study. In general, more detailed description of data collection and analysis, integration, inferences and justifying the use of mixed methods is needed. Additionally, improved reporting of methodological rigour is required. This review calls for increased discussion of practical techniques for establishing rigour in mixed methods research, beyond those for quantitative and qualitative criteria individually. A guide for reporting mixed methods research in population health should be developed to improve the reporting quality of mixed methods studies. Through improved reporting, mixed methods can provide strong evidence to inform policy and practice. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  19. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    Aimed at improving the accuracy of calculating the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found; these result from the dividing and summing used to approximate the integral. Based on the results of former research, we propose a further subdividing and summing method. For β particles with energies spanning the entire spectrum, the energy values are restricted to integral multiples of 1 keV, and the subdivision number is varied from 1 to 30, yielding collections of energy deposition calculation errors. Searching for the minimum error in these collections gives the corresponding energy and subdivision number pairs, and hence the optimal subdivision number. The method is carried out for four solid materials, Al, Si, Ni and Au, to calculate energy deposition. The result shows that the calculation error is reduced by one order of magnitude with the improved algorithm.
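
    The search loop described above can be sketched with a stand-in integrand; the exponential depth profile and analytic reference value here are toy assumptions, not the paper's electron-transport model:

```python
import math

def deposited(E, n):
    """Toy stand-in for an energy-deposition integral at energy E (keV),
    computed with a composite midpoint rule over n subdivisions of [0, E]."""
    f = lambda x: math.exp(-x / E)          # hypothetical depth profile
    h = E / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def optimal_subdivision(E, reference, n_max=30):
    """Search n = 1..n_max for the subdivision count minimizing the error
    against a reference value, as in the algorithm described above."""
    errors = {n: abs(deposited(E, n) - reference) for n in range(1, n_max + 1)}
    return min(errors, key=errors.get)

E = 100.0
reference = E * (1 - math.exp(-1.0))        # analytic value of the toy integral
n_best = optimal_subdivision(E, reference)
print(n_best)
```

    In the paper the reference is built from tabulated electron-transport data per 1 keV energy step, so the minimizing n can differ across energies; the loop structure is the same.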

  20. Fourier spatial frequency analysis for image classification: training the training set

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart

    2016-04-01

    The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the training set grows. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image-specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.

  1. Optimization of High-Dimensional Functions through Hypercube Evaluation

    PubMed Central

    Abiyev, Rahib H.; Tunay, Mustafa

    2015-01-01

    A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube and is called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes the solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines the displacement and evaluates objective functions using new points, and the searching space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes have been designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. The simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, or even 10000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
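
    A minimal sketch of a hypercube-style search, assuming uniform sampling inside the current hypercube, recentring on the best point (displacement), and geometric shrinking of the edge length; the authors' exact displacement-shrink and searching-space rules are not reproduced here:

```python
import random

def hypercube_optimize(f, dim, lo=-5.0, hi=5.0, n_points=200, iters=100,
                       shrink=0.9, seed=0):
    """Sample points uniformly in the current hypercube, move its centre to
    the best point found so far, and shrink the hypercube each iteration."""
    rng = random.Random(seed)
    centre = [(lo + hi) / 2.0] * dim
    half = (hi - lo) / 2.0
    best_x, best_f = centre[:], f(centre)
    for _ in range(iters):
        for _ in range(n_points):
            x = [c + rng.uniform(-half, half) for c in centre]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        centre = best_x[:]          # displacement: recentre on the best point
        half *= shrink              # shrink: reduce the search hypercube
    return best_x, best_f

# Shifted sphere with optimum at (1, ..., 1); best value only decreases
shifted_sphere = lambda x: sum((v - 1.0) ** 2 for v in x)
x_best, f_best = hypercube_optimize(shifted_sphere, dim=10)
print(f_best)
```

    The best value found is monotonically non-increasing by construction; the shrink factor trades exploration against convergence speed.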

  2. Mixed Sequence Reader: A Program for Analyzing DNA Sequences with Heterozygous Base Calling

    PubMed Central

    Chang, Chun-Tien; Tsai, Chi-Neu; Tang, Chuan Yi; Chen, Chun-Houh; Lian, Jang-Hau; Hu, Chi-Yu; Tsai, Chia-Lung; Chao, Angel; Lai, Chyong-Huey; Wang, Tzu-Hao; Lee, Yun-Shien

    2012-01-01

    The direct sequencing of PCR products generates heterozygous base-calling fluorescence chromatograms that are useful for identifying single-nucleotide polymorphisms (SNPs), insertion-deletions (indels), short tandem repeats (STRs), and paralogous genes. Indels and STRs can be easily detected using the currently available Indelligent or ShiftDetector programs, which do not search reference sequences. However, the detection of other genomic variants remains a challenge due to the lack of appropriate tools for heterozygous base-calling fluorescence chromatogram data analysis. In this study, we developed a free web-based program, Mixed Sequence Reader (MSR), which can directly analyze heterozygous base-calling fluorescence chromatogram data in .abi file format using comparisons with reference sequences. The heterozygous sequences are identified as two distinct sequences and aligned with reference sequences. Our results showed that MSR may be used to (i) physically locate indel and STR sequences and determine STR copy number by searching NCBI reference sequences; (ii) predict combinations of microsatellite patterns using the Federal Bureau of Investigation Combined DNA Index System (CODIS); (iii) determine human papilloma virus (HPV) genotypes by searching current viral databases in cases of double infections; (iv) estimate the copy number of paralogous genes, such as β-defensin 4 (DEFB4) and its paralog HSPDP3. PMID:22778697

  3. Real-time scheduling using minimum search

    NASA Technical Reports Server (NTRS)

    Tadepalli, Prasad; Joshi, Varad

    1992-01-01

    In this paper we consider a simple model of real-time scheduling. We present a real-time scheduling system called RTS which is based on Korf's Minimin algorithm. Experimental results show that the schedule quality initially improves with the amount of look-ahead search and then tapers off quickly. It appears that reasonably good schedules can be produced with a relatively shallow search.
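
    Korf's Minimin backup rule, on which RTS is based, values a state by the minimum heuristic value on the frontier of a depth-bounded search, then commits to the best immediate successor. A generic sketch, with a toy state space (advance by 1 or 2 toward a goal) that is an illustrative assumption, not the paper's scheduling model:

```python
def minimin(state, depth, successors, h):
    """Minimin value: the minimum frontier heuristic within the depth bound
    (single-agent lookahead, all outcomes under the agent's control)."""
    if depth == 0:
        return h(state)
    children = successors(state)
    if not children:
        return h(state)
    return min(minimin(c, depth - 1, successors, h) for c in children)

def choose_action(state, depth, successors, h):
    """Commit to the successor with the best lookahead value, as a
    real-time system must do under a fixed search budget."""
    return min(successors(state), key=lambda c: minimin(c, depth - 1, successors, h))

successors = lambda s: [s + 1, s + 2]   # toy moves
h = lambda s: abs(10 - s)               # toy heuristic: distance to goal 10
print(choose_action(0, 3, successors, h))  # → 2
```

    Deepening the lookahead bound improves the committed action at exponential cost, matching the paper's observation that quality gains taper off quickly with depth.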

  4. Automatic Hypocenter Determination Method in JMA Catalog and its Application

    NASA Astrophysics Data System (ADS)

    Tamaribuchi, K.

    2017-12-01

    The number of detectable earthquakes around Japan has increased with the development of the high-sensitivity seismic observation network. After the 2011 Tohoku-oki earthquake, the number of detectable earthquakes increased dramatically due to its aftershocks and induced earthquakes, making manual determination of all hypocenters impossible. The Japan Meteorological Agency (JMA), which produces the earthquake catalog in Japan, has developed a new automatic hypocenter determination method and started its operation on April 1, 2016. This method (named the PF method, for Phase combination Forward search method) can determine the hypocenters of earthquakes that occur simultaneously by searching for the optimal combination of P- and S-wave arrival times and the maximum amplitudes using a Bayesian estimation technique. In the 2016 Kumamoto earthquake sequence, we successfully detected about 70,000 aftershocks automatically during the period from April 14 to the end of May, and this method contributed to the real-time monitoring of the seismic activity. Furthermore, this method can also be applied to Earthquake Early Warning (EEW). The application of this method to EEW is called the IPF method and has been used as the hypocenter determination method of the EEW system in JMA since December 2016. Further development of this method can contribute not only to speeding up catalog production but also to improving the reliability of early warnings.

  5. Computational search for aflatoxin binding proteins

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Jinfeng; Zhang, Lujia; He, Xiao; Zhang, John Z. H.

    2017-10-01

    Aflatoxin is one of the mycotoxins that contaminate various food products. Among the various aflatoxin types (B1, B2, G1, G2 and M1), aflatoxin B1 is the most important and the most toxic. In this study, through computational screening, we found that several proteins may bind specifically with different types of aflatoxins. A combination of theoretical methods, including target fishing, molecular docking, molecular dynamics (MD) simulation and MM/PBSA calculation, was utilized to search for new aflatoxin B1 binding proteins. A recently developed method for calculating the entropic contribution to binding free energy, called interaction entropy (IE), was employed to compute the binding free energy between the protein and aflatoxin B1. Through comprehensive comparison, three proteins, namely, trihydroxynaphthalene reductase, GSK-3b and Pim-1, were eventually selected as potent aflatoxin B1 binding proteins. GSK-3b and Pim-1 are drug targets of cancers or neurological diseases. GSK-3b is the strongest binder for aflatoxin B1.
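
    The interaction entropy method referenced above obtains the entropic term directly from fluctuations of the protein-ligand interaction energy sampled along the MD trajectory. In its usual formulation (as commonly stated in the interaction-entropy literature, not restated in this abstract), with angle brackets denoting the ensemble average over MD snapshots:

```latex
-T\Delta S \;=\; kT \,\ln \left\langle e^{\beta\,\Delta E_{\mathrm{int}}} \right\rangle,
\qquad
\Delta E_{\mathrm{int}} \;=\; E_{\mathrm{int}} - \left\langle E_{\mathrm{int}} \right\rangle,
\qquad
\beta = \frac{1}{kT}.
```

    This avoids the costly normal-mode analysis traditionally used for the entropy term in MM/PBSA calculations.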

  6. Crisis in science: in search for new theoretical foundations.

    PubMed

    Schroeder, Marcin J

    2013-09-01

    Recognition of the need for theoretical biology more than half a century ago did not bring substantial progress in this direction. Recently, the need for new methods in science, including physics, became clear. The breakthrough should be sought in answering the question "What is life?", which can help to explain the mechanisms of consciousness and consequently give insight into the way we comprehend reality. This could help in the search for new methods in the study of both physical and biological phenomena. However, to achieve this, a new theoretical discipline will have to be developed with a very general conceptual framework and the rigor of mathematical reasoning, allowing it to assume a leading role in science. Since its foundations are in the recognition of the role of life and consciousness in the epistemic process, it could be called biomathics. The prime candidates proposed here as the fundamental concepts for biomathics are 'information' and 'information integration', with an appropriately general mathematical formalism. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    PubMed Central

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. In the bottom level, each subpopulation employing the canonical ABC method searches for the part-dimensional optimum in parallel; these partial solutions are then assembled into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness. PMID:24592200

  8. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. In the bottom level, each subpopulation employing the canonical ABC method searches for the part-dimensional optimum in parallel; these partial solutions are then assembled into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness.

  9. On Bayesian methods of exploring qualitative interactions for targeted treatment.

    PubMed

    Chen, Wei; Ghosh, Debashis; Raghunathan, Trivellore E; Norkin, Maxim; Sargent, Daniel J; Bepler, Gerold

    2012-12-10

    Providing personalized treatments designed to maximize benefits and minimize harms is of tremendous current medical interest. One problem in this area is the evaluation of the interaction between the treatment and other predictor variables. Treatment effects in subgroups having the same direction but different magnitudes are called quantitative interactions, whereas those having opposite directions in subgroups are called qualitative interactions (QIs). Identifying QIs is challenging because they are rare and usually unknown among many potential biomarkers. Meanwhile, subgroup analysis reduces the power of hypothesis testing and multiple subgroup analyses inflate the type I error rate. We propose a new Bayesian approach to search for QIs in a multiple regression setting with adaptive decision rules. We consider various regression models for the outcome. We illustrate this method in two examples of phase III clinical trials. The algorithm is straightforward and easy to implement using existing software packages. We provide a sample code in Appendix A. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Inventing and improving ribozyme function: rational design versus iterative selection methods

    NASA Technical Reports Server (NTRS)

    Breaker, R. R.; Joyce, G. F.

    1994-01-01

    Two major strategies for generating novel biological catalysts exist. One relies on our knowledge of biopolymer structure and function to aid in the 'rational design' of new enzymes. The other, often called 'irrational design', aims to generate new catalysts, in the absence of detailed physicochemical knowledge, by using selection methods to search a library of molecules for functional variants. Both strategies have been applied, with considerable success, to the remodeling of existing ribozymes and the development of ribozymes with novel catalytic function. The two strategies are by no means mutually exclusive, and are best applied in a complementary fashion to obtain ribozymes with the desired catalytic properties.

  11. Referential calls coordinate multi-species mobbing in a forest bird community.

    PubMed

    Suzuki, Toshitaka N

    2016-01-01

    Japanese great tits (Parus minor) use a sophisticated system of anti-predator communication when defending their offspring: they produce different mobbing calls for different nest predators (snake versus non-snake predators) and thereby convey this information to conspecifics (i.e. functionally referential call system). The present playback experiments revealed that these calls also serve to coordinate multi-species mobbing at nests; snake-specific mobbing calls attracted heterospecific individuals close to the sound source and elicited snake-searching behaviour, whereas non-snake mobbing calls attracted these birds at a distance. This study demonstrates for the first time that referential mobbing calls trigger different formations of multi-species mobbing parties.

  12. Searching for Vector-Like Quarks Using 36.1 fb-1 of Proton-Proton Collisions Decaying to Same-Charge Dileptons and Trileptons + b-Jets at √s = 13 TeV with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Jones, Sarah Louise

    Since the discovery of the Higgs boson in 2012, the search for new physics beyond the Standard Model has been greatly intensified. At the CERN Large Hadron Collider (LHC), ATLAS searches for new physics entail looking for new particles by colliding protons together. Presented here is a search for a new form of quark matter called Vector-like Quarks (VLQ), which are hypothetical particles that are expected to have mass around a few TeV. VLQ can come in a variety of forms and can couple to their Standard Model (SM) quark counterparts, particularly to the third generation. They are necessary in several beyond the SM theories in order to solve the hierarchy problem. This search uses 36.1 fb-1 of proton-proton collision data collected with the ATLAS detector at the LHC from August 2015 to October 2016. Only events with two leptons of the same charge, or three leptons, plus b-jets and high missing transverse energy are considered in the main analysis. This signature is rarely produced in the SM, which means the backgrounds in this analysis are relatively low. This analysis is sensitive to specific predicted decay modes from pair production of an up-type VLQ with a charge of +2/3, T, an up-type VLQ with a charge of +5/3, T_{5/3}, and a down-type quark with a charge of -1/3, B, as well as single production of T_{5/3}. There is another theorized VLQ that this analysis is not sensitive to: B_{-4/3}, due to its primary decay mode, which is unable to produce the final-state signature of interest. A mostly frequentist statistical technique, called the CL_{S} Method, is used to interpret the data and set limits on the T, B, and T_{5/3} signal models. Using this method, exclusion limits are set at the 95% confidence level, effectively excluding T mass below 0.98 TeV, T_{5/3} mass below 1.2 TeV, and B mass below 1.0 TeV, assuming singlet branching ratios. Also, branching ratio independent limits are set on the T and B VLQ.

  13. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
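
    The heart of the trust-region mechanism described above is the radius update driven by the ratio of the actual improvement to the Expected Improvement. A minimal sketch follows; the thresholds, shrink/expand factors, and radius bounds are illustrative placeholders, not the tuned values of TRIKE.

```python
def update_radius(radius, actual_improvement, expected_improvement,
                  low=0.25, high=0.75, shrink=0.5, expand=2.0,
                  r_min=1e-3, r_max=1.0):
    """Trust-region-style update: shrink when the Kriging model's EI
    over-promised, expand when the actual improvement exceeded it."""
    if expected_improvement <= 0.0:
        rho = 0.0
    else:
        rho = actual_improvement / expected_improvement
    if rho < low:
        radius *= shrink    # model unreliable here: search more locally
    elif rho > high:
        radius *= expand    # model conservative: widen the search region
    return min(max(radius, r_min), r_max)
```

    Each TRIKE-style iteration would maximize the EI within the current radius around the incumbent, evaluate the expensive function there, and feed the observed improvement back through this update.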

  14. Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy

    PubMed Central

    Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao

    2016-01-01

    Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We evaluated our method on five different datasets and compared it with five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000
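
    The first-order incremental search can be pictured as a greedy mutual-information selection over discretized expression profiles. The sketch below scores each candidate regulator by relevance minus average redundancy (an mRMR-style stand-in for the paper's exact maximum-relevance/maximum-significance criterion; the gene names are invented for illustration):

```python
import math
from collections import Counter

def mutual_info(x, y):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log[ p(a,b) / (p(a) p(b)) ] with probabilities c/n etc.
        mi += (c / n) * math.log(c * n / (px[a] * py[b]))
    return mi

def select_regulators(target, candidates, k=2):
    """Greedy first-order incremental search: repeatedly add the candidate
    whose relevance to the target, minus its average redundancy with the
    already-selected regulators, is largest."""
    selected = []
    remaining = dict(candidates)   # gene name -> discretized expression list
    while remaining and len(selected) < k:
        def score(name):
            rel = mutual_info(remaining[name], target)
            red = (sum(mutual_info(remaining[name], candidates[s])
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected
```

    A perfectly co-varying candidate carries maximal mutual information with the target, so it is picked first; an independent one carries none.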

  15. FastBit Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
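
    The word-aligned idea behind WAH can be illustrated with a toy encoder: each word is either a literal group of payload bits or a fill word recording a run of identical groups. This is a sketch of the run-length principle only, not FastBit's actual 32-bit on-disk WAH layout.

```python
def wah_encode(bits, w=8):
    """Simplified WAH-style encoding of a bit list with w-bit words:
    w-1 payload bits per word; uniform groups collapse into fill words
    ('fill', bit, run_length_in_groups), others stay literal."""
    p = w - 1
    groups = [bits[i:i + p] + [0] * (p - len(bits[i:i + p]))
              for i in range(0, len(bits), p)]   # pad the last group
    words = []
    i = 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):            # uniform group: try a fill
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            if run > 1:
                words.append(("fill", g[0], run))
                i += run
                continue
        words.append(("lit", tuple(g)))
        i += 1
    return words

def wah_decode(words, w=8):
    p = w - 1
    out = []
    for kind, *rest in words:
        if kind == "fill":
            bit, run = rest
            out.extend([bit] * (p * run))        # expand the run of groups
        else:
            out.extend(rest[0])
    return out
```

    Long uniform runs shrink to a single fill word, which is why bitwise logical operations can be performed directly on the compressed form, run against run.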

  16. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
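
    The cuckoo search scheme described above can be sketched in a few lines: Lévy-flight proposals around the nests, replacement of randomly chosen worse nests, and abandonment of a fraction pa of the worst nests. The two parameters the paper highlights are the number of nests and pa; the step scaling and iteration budget below are illustrative placeholder choices, not the variant used for curve fitting.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-flight step."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=300, seed=0):
    """Minimize f over [lo, hi]^dim with a minimal cuckoo search."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda t: min(max(t, lo), hi)
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    best = min(range(n_nests), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around nest i, scaled by its distance to the best
            new = [clip(nests[i][d]
                        + 0.01 * levy_step(rng) * (nests[i][d] - nests[best][d]))
                   for d in range(dim)]
            fn = f(new)
            j = rng.randrange(n_nests)
            if fn < fit[j]:                      # replace a random nest if better
                nests[j], fit[j] = new, fn
        # abandon the worst fraction pa of nests and rebuild them at random
        worst = sorted(range(n_nests), key=lambda i: fit[i], reverse=True)
        for i in worst[:int(pa * n_nests)]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
        best = min(range(n_nests), key=lambda i: fit[i])
    return nests[best], fit[best]
```

    On a smooth test function such as the sphere, the combination of local Lévy refinement and random restarts drives the best nest close to the optimum.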

  17. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  18. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward a common goal in an automated fashion.
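
    The experiment-selection rule of the inquiry engine can be sketched with a discrete posterior: score each candidate measurement by the Shannon entropy of its predicted outcome distribution and pick the maximizer. The threshold models and binary outcomes below are invented for illustration; the thesis's nested entropy sampling replaces the exhaustive scan over candidates.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def best_experiment(candidates, models, weights, likelihood):
    """Pick the candidate experiment whose predictive distribution of
    binary outcomes has maximum entropy under the current posterior,
    i.e. the measurement whose result we can least predict.
    likelihood(outcome, experiment, model) is the forward model."""
    def predictive_entropy(x):
        # marginal probability of outcome 1, averaged over the posterior
        p1 = sum(w * likelihood(1, x, m) for m, w in zip(models, weights))
        return entropy([p1, 1.0 - p1])
    return max(candidates, key=predictive_entropy)
```

    For example, when locating an unknown step threshold t with deterministic responses (outcome 1 iff x > t) and a uniform posterior over t in {2, 4, 6, 8}, the most informative probe is where the posterior is most undecided.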

  19. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
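
    The sequence-of-response-surfaces strategy can be sketched in one dimension with a polynomial surrogate standing in for the neural network: sample around the current design, fit a local model, jump to its minimum, and shrink the sampling radius. The sampling pattern, shrink factor, and iteration count are arbitrary illustrative choices, not the paper's procedure.

```python
def parabola_vertex(x0, x1, x2, y0, y1, y2):
    """Vertex of the parabola through three sample points (a tiny
    response surface standing in for the neural-network surrogate)."""
    d1 = (y1 - y0) / (x1 - x0)                 # first divided difference
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)                  # second divided difference
    # stationary point of the Newton-form quadratic
    return (x0 + x1) / 2 - d1 / (2 * a)

def surrogate_minimize(f, x, radius=1.0, iters=8):
    """Traverse the design space via a sequence of local surrogates:
    fit, move to the surrogate minimum if it improves, shrink."""
    for _ in range(iters):
        xs = (x - radius, x, x + radius)
        ys = tuple(f(v) for v in xs)
        v = parabola_vertex(*xs, *ys)
        if f(v) < f(x):                        # accept only real improvement
            x = v
        radius *= 0.5
    return x
```

    On an exactly quadratic objective the surrogate is exact and the search lands on the optimum in one step; on general objectives the shrinking radius plays the role of refining the response surface near the optimum.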

  20. An Evaluation Framework for CALL

    ERIC Educational Resources Information Center

    McMurry, Benjamin L.; Williams, David Dwayne; Rich, Peter J.; Hartshorn, K. James

    2016-01-01

    Searching prestigious Computer-assisted Language Learning (CALL) journals for references to key publications and authors in the field of evaluation yields a short list. The "American Journal of Evaluation"--the flagship journal of the American Evaluation Association--is only cited once in both the "CALICO Journal and Language…

  1. Dual-Level Method for Estimating Multistructural Partition Functions with Torsional Anharmonicity.

    PubMed

    Bao, Junwei Lucas; Xing, Lili; Truhlar, Donald G

    2017-06-13

    For molecules with multiple torsions, an accurate evaluation of the molecular partition function requires consideration of multiple structures and their torsional-potential anharmonicity. We previously developed a method called MS-T for this problem, and it requires an exhaustive conformational search with frequency calculations for all the distinguishable conformers; this can become expensive for molecules with a large number of torsions (and hence a large number of structures) if it is carried out with high-level methods. In the present work, we propose a cost-effective method to approximate the MS-T partition function when there are a large number of structures, and we test it on a transition state that has eight torsions. This new method is a dual-level method that combines an exhaustive conformer search carried out by a low-level electronic structure method (for instance, AM1, which is very inexpensive) and selected calculations with a higher-level electronic structure method (for example, density functional theory with a functional that is suitable for conformational analysis and thermochemistry). To provide a severe test of the new method, we consider a transition state structure that has 8 torsional degrees of freedom; this transition state structure is formed along one of the reaction pathways of the hydrogen abstraction reaction (at carbon-1) of ketohydroperoxide (KHP; its IUPAC name is 4-hydroperoxy-2-pentanone) by OH radical. We find that our proposed dual-level method is able to significantly reduce the computational cost for computing MS-T partition functions for this test case with a large number of torsions and with a large number of conformers because we carry out high-level calculations for only a fraction of the distinguishable conformers found by the low-level method. 
In the example studied here, the dual-level method with 40 high-level optimizations (1.8% of the number of optimizations in a coarse-grained full search and 0.13% of the number of optimizations in a fine-grained full search) reproduces the full calculation of the high-level partition function within a factor of 1.0 to 2.0 from 200 to 1000 K. The error in the dual-level method can be further reduced to factors of 0.6 to 1.1 over the whole temperature interval from 200 to 2400 K by optimizing 128 structures (5.9% of the number of optimizations in a coarse-grained full search and 0.41% of the number of optimizations in a fine-grained full search). These factor-of-two or better errors are small compared to errors up to a factor of 1.0 × 10^3 if one neglects multistructural effects for the case under study.
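
    The dual-level bookkeeping can be sketched as follows: a cheap method supplies relative energies for every conformer, a high-level method re-optimizes only a subset, and conformers without high-level energies reuse their low-level values shifted by the mean high-low difference on the overlapping subset. This is a sketch of the energy-combination step only; the real MS-T partition function also includes torsional-anharmonicity and rovibrational factors, and the conformer names below are invented.

```python
import math

R = 0.0019872  # gas constant, kcal/(mol*K)

def dual_level_partition(low_e, high_e, T):
    """Conformational partition function at temperature T (K) from
    low-level relative energies (kcal/mol) for all conformers and
    high-level energies for a subset."""
    # mean correction learned from conformers computed at both levels
    shift = sum(high_e[c] - low_e[c] for c in high_e) / len(high_e)
    q = 0.0
    for c, e in low_e.items():
        energy = high_e.get(c, e + shift)      # fall back to shifted low level
        q += math.exp(-energy / (R * T))
    return q
```

    The cost saving comes from the `high_e` dictionary covering only a small fraction of the conformers enumerated by the low-level search.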

  2. SDF technology in location and navigation procedures: a survey of applications

    NASA Astrophysics Data System (ADS)

    Kelner, Jan M.; Ziółkowski, Cezary

    2017-04-01

    The basis for the development of the Doppler location method, also called the signal Doppler frequency (SDF) method or technology, is the analytical solution of the wave equation for a mobile source. This paper presents an overview of simulations, numerical analyses and empirical studies of the possibilities and the range of SDF method applications. In the paper, the various applications from numerous publications are collected and described. They mainly focus on the use of the SDF method in emitter positioning, electronic warfare, crisis management, search and rescue, and navigation. The developed method has an innovative property that is unique among location methods: it allows the simultaneous location of many radio emitters. Moreover, this is the first method based on the Doppler effect that allows positioning of transmitters using a single mobile platform. Results obtained with the SDF method by other teams are also presented.

  3. Search for W′ → t b̄ in the lepton plus jets final state in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Axen, B.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baas, A. E.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Bansil, H. S.; Barak, L.; Baranov, S. 
P.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernat, P.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boddy, C. R.; Boehler, M.; Boek, T. T.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. 
S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozic, I.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brennan, A. J.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.; Bucci, F.; Buchholz, P.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bundock, A. C.; Burckhart, H.; Burdin, S.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Buszello, C. P.; Butler, B.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Byszewski, M.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarda, S.; Cameron, D.; Caminada, L. M.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. 
R.; Cattai, A.; Cattani, G.; Caudron, J.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B. C.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chang, P.; Chapleau, B.; Chapman, J. D.; Charfeddine, D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, Y.; Cheplakov, A.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiefari, G.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Chouridou, S.; Chow, B. K. B.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciocio, A.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Clarke, R. N.; Cleland, W.; Clemens, J. C.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Cogan, J. G.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consonni, S. M.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D'Auria, S.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. 
J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Daniells, A. C.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davignon, O.; Davison, A. R.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Nooij, L.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dechenaux, B.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Doglioni, C.; Doherty, T.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Dris, M.; Dubbert, J.; Dube, S.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudziak, F.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dwuznik, M.; Dyndal, M.; Ebke, J.; Edson, W.; Edwards, N. 
C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Engelmann, R.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernis, G.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Favareto, A.; Fayard, L.; Federic, P.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Fernandez Perez, S.; Ferrag, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, J.; Fisher, W. C.; Fitzgerald, E. A.; Flechl, M.; Fleck, I.; Fleischmann, P.; Fleischmann, S.; Fletcher, G. T.; Fletcher, G.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Flowerdew, M. J.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Fraternali, M.; French, S. T.; Friedrich, C.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y. S.; Garay Walls, F. M.; Garberson, F.; García, C.; García Navarro, J. 
E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gianotti, F.; Gibbard, B.; Gibson, S. M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Glonti, G. L.; Goblirsch-Kolb, M.; Goddard, J. R.; Godlewski, J.; Goeringer, C.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Gray, H. M.; Graziani, E.; Grebenyuk, O. G.; Greenwood, Z. D.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grishkevich, Y. V.; Grivaz, J.-F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guicheney, C.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Gupta, S.; Gutierrez, P.; Gutierrez Ortiz, N. 
G.; Gutschow, C.; Guttman, N.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Hall, D.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamer, M.; Hamilton, A.; Hamilton, S.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, S.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, L.; Hejbal, J.; Helary, L.; Heller, C.; Heller, M.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Hengler, C.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Hernández Jiménez, Y.; Herrberg-Schubert, R.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hoffmann, D.; Hohlfeld, M.; Holmes, T. R.; Hong, T. M.; Hooft van Huysduynen, L.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. 
A.; Hurwitz, M.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Inamaru, Y.; Ince, T.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Iturbe Ponce, J. M.; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansen, H.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jung, C. A.; Jussel, P.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kajomovitz, E.; Kalderon, C. W.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneda, M.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kar, D.; Karakostas, K.; Karastathis, N.; Kareem, M. J.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasieczka, G.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Katre, A.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Kazarinov, M. Y.; Keeler, R.; Kehoe, R.; Keil, M.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Keung, J.; Keyes, R. A.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharlamov, A.; Khodinov, A.; Khomich, A.; Khoo, T. 
J.; Khoriauli, G.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H. Y.; Kim, H.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Klok, P. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Koletsou, I.; Koll, J.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; König, S.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruker, T.; Krumnack, N.; Krumshteyn, Z. V.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kurumida, R.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kyriazopoulos, D.; La Rosa, A.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. 
R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laier, H.; Lambourne, L.; Lammers, S.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, H.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Leroy, C.; Lester, C. G.; Lester, C. M.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, A.; Lewis, G. H.; Leyko, A. M.; Leyton, M.; Li, B.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, S.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Lombardo, V. P.; Long, B. A.; Long, J. D.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lungwitz, M.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. 
L.; Maccarrone, G.; Macchiolo, A.; Machado Miguens, J.; Macina, D.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeno, M.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Mahmoud, S.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mandelli, B.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J. A.; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mapelli, L.; March, L.; Marchand, J. F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marjanovic, M.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, H.; Martinez, M.; Martin-Haugh, S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Massol, N.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazzaferro, L.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Mechnich, J.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Meric, N.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Merritt, H.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Middleton, R. 
P.; Migas, S.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, M.; Morii, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morton, A.; Morvaj, L.; Moser, H. G.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, K.; Mueller, T.; Mueller, T.; Muenstermann, D.; Munwes, Y.; Murillo Quijada, J. A.; Murray, W. J.; Musheghyan, H.; Musto, E.; Myagkov, A. G.; Myska, M.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Nanava, G.; Naranjo Garcia, R. F.; Narayan, R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negri, G.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. 
B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolics, K.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Norberg, S.; Nordberg, M.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nuti, F.; O'Brien, B. J.; O'grady, F.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, M. I.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olchevski, A. G.; Olivares Pino, S. A.; Oliveira Damazio, D.; Oliver Garcia, E.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ouellette, E. A.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paganis, E.; Pahl, C.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panduro Vazquez, J. G.; Pani, P.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passaggio, S.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pearce, J.; Pedersen, L. E.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Codina, E.; Pérez García-Estañ, M. 
T.; Perini, L.; Pernegger, H.; Perrella, S.; Perrino, R.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Pettersson, N. E.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Pingel, A.; Pinto, B.; Pires, S.; Pitt, M.; Pizio, C.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poddar, S.; Podlyski, F.; Poettgen, R.; Poggioli, L.; Pohl, D.; Pohl, M.; Polesello, G.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Portell Bueso, X.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pralavorio, P.; Pranko, A.; Prasad, S.; Pravahan, R.; Prell, S.; Price, D.; Price, J.; Price, L. E.; Prieur, D.; Primavera, M.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Przysiezniak, H.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Qureshi, A.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Rao, K.; Rauscher, F.; Rave, T. C.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reisin, H.; Relich, M.; Rembser, C.; Ren, H.; Ren, Z. L.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. 
L.; Reznicek, P.; Rezvani, R.; Richter, R.; Ridel, M.; Rieck, P.; Rieger, J.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodrigues, L.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, M.; Rose, P.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Saddique, A.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, T.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sartisohn, G.; Sasaki, O.; Sasaki, Y.; Sauvage, G.; Sauvan, E.; Savard, P.; Savu, D. O.; Sawyer, C.; Sawyer, L.; Saxon, D. H.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. 
I.; Schiavi, C.; Schieck, J.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Schwoerer, M.; Sciacca, F. G.; Scifo, E.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekula, S. J.; Selbach, K. E.; Seliverstov, D. M.; Sellers, G.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Short, D.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simoniello, R.; Sinervo, P.; Sinev, N. B.; Siragusa, G.; Sircar, A.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Solans, C. 
A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosebee, M.; Soualah, R.; Soueid, P.; Soukharev, A. M.; South, D.; Spagnolo, S.; Spanò, F.; Spearman, W. R.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Spreitzer, T.; St. Denis, R. D.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stern, S.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, Y.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanasijczuk, A. J.; Tannenwald, B. B.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Teoh, J. 
J.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thong, W. M.; Thun, R. P.; Tian, F.; Tibbetts, M. J.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Topilin, N. D.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Tran, H. L.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turk Cakir, I.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urbaniec, D.; Urquijo, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. 
C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloso, F.; Velz, T.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Virzi, J.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, A.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Walsh, B.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wendland, D.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wijeratne, P. A.; Wildauer, A.; Wildt, M. A.; Wilkens, H. G.; Williams, H. 
H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. T.; Wittgen, M.; Wittig, T.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wright, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xiao, M.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamada, M.; Yamaguchi, H.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamamura, T.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yanush, S.; Yao, L.; Yao, W.-M.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zevi della Porta, G.; Zhang, D.; Zhang, F.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, L.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Zinonos, Z.; Ziolkowski, M.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zutshi, V.; Zwalinski, L.

    2015-04-01

    A search for new charged massive gauge bosons, called W′, is performed with the ATLAS detector at the LHC, in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV, using a dataset corresponding to an integrated luminosity of 20.3 fb⁻¹. This analysis searches for W′ bosons in the W′ → t b̄ decay channel in final states with electrons or muons, using a multivariate method based on boosted decision trees. The search covers masses between 0.5 and 3.0 TeV, for right-handed or left-handed W′ bosons. No significant deviation from the Standard Model expectation is observed and limits are set on the W′ → t b̄ cross-section times branching ratio and on the W′-boson effective couplings as a function of the W′-boson mass using the CLs procedure. For a left-handed (right-handed) W′ boson, masses below 1.70 (1.92) TeV are excluded at 95% confidence level.
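The CLs procedure used for the limits above can be illustrated on a toy example. The single-bin counting experiment, the choice of test statistic, and the function name below are illustrative assumptions, not the ATLAS analysis machinery; a minimal sketch only.

```python
import numpy as np

rng = np.random.default_rng(42)

def cls_exclusion(n_obs, b, s, n_toys=100_000):
    """Toy-based CLs for a hypothetical one-bin counting experiment.

    The test statistic here is simply the observed count: a small count
    is signal-unlike.  CLs = p_{s+b} / p_{b}, the ratio of the tail
    probabilities of an outcome at least as signal-unlike as the data.
    """
    p_sb = np.mean(rng.poisson(s + b, n_toys) <= n_obs)  # tail prob. under s+b
    p_b = np.mean(rng.poisson(b, n_toys) <= n_obs)       # tail prob. under b-only
    return p_sb / p_b

# Scan the signal yield: the exclusion boundary is where CLs drops below 0.05.
b, n_obs = 10.0, 9
for s in (2.0, 5.0, 10.0, 15.0):
    print(f"s = {s:4.1f}  CLs = {cls_exclusion(n_obs, b, s):.3f}")
```

Scanning the signal strength and finding where CLs falls below 0.05 gives the 95% CL excluded cross-section, which is the logic behind the quoted mass limits.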

  4. Search for W′ → t b̄ in the lepton plus jets final state in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-02-25

    A search for new charged massive gauge bosons, called W′, is performed with the ATLAS detector at the LHC, in proton–proton collisions at a centre-of-mass energy of √s = 8 TeV, using a dataset corresponding to an integrated luminosity of 20.3 fb⁻¹. This analysis searches for W′ bosons in the W′ → t b̄ decay channel in final states with electrons or muons, using a multivariate method based on boosted decision trees. The search covers masses between 0.5 and 3.0 TeV, for right-handed or left-handed W′ bosons. No significant deviation from the Standard Model expectation is observed and limits are set on the W′ → t b̄ cross-section times branching ratio and on the W′-boson effective couplings as a function of the W′-boson mass using the CLs procedure. For a left-handed (right-handed) W′ boson, masses below 1.70 (1.92) TeV are excluded at 95% confidence level.

  5. One- and two-dimensional search of an equation of state using a newly released 2DRoptimize package

    NASA Astrophysics Data System (ADS)

    Jamal, M.; Reshak, A. H.

    2018-05-01

    A new package called 2DRoptimize has been released for performing two-dimensional searches of the equation of state (EOS) for rhombohedral, tetragonal, and hexagonal compounds. The package is compatible with, and available alongside, the WIEN2k package, and performs a convenient combined volume and c/a structure optimization. First, the package finds the best c/a value and the associated energy for each volume. In the second step, it calculates the EOS. The package then fits the c/a ratio as a function of volume to obtain the c/a ratio at the optimized volume. In the last stage, using the optimized volume and c/a ratio, 2DRoptimize calculates the a and c lattice constants for tetragonal and hexagonal compounds, as well as the a lattice constant together with the α angle for rhombohedral compounds. We tested the new package on several hexagonal, tetragonal, and rhombohedral structures; the 2D search for the EOS proved more accurate than a 1D search, and our results agreed well with experimental data and improved on previous theoretical calculations.
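The staged volume and c/a optimization described above can be sketched with a synthetic energy surface. The quadratic `energy` function below is a hypothetical stand-in for DFT total energies, and plain parabola fits stand in for a proper EOS form such as Birch-Murnaghan; none of this is the 2DRoptimize code itself.

```python
import numpy as np

# Hypothetical total-energy surface E(V, c/a); in a real run every point
# would come from a DFT total-energy calculation (e.g. with WIEN2k).
def energy(V, r):  # r is the c/a ratio
    return 0.002 * (V - 50.0) ** 2 + 0.8 * (r - 1.60 - 0.002 * (V - 50.0)) ** 2

volumes = np.linspace(45.0, 55.0, 11)
ratios = np.linspace(1.4, 1.8, 21)

# Step 1: for each volume, fit E(c/a) and keep the minimising c/a and energy.
best_r, best_E = [], []
for V in volumes:
    a2, a1, a0 = np.polyfit(ratios, energy(V, ratios), 2)
    r_min = -a1 / (2.0 * a2)
    best_r.append(r_min)
    best_E.append(np.polyval([a2, a1, a0], r_min))

# Step 2: fit the lower envelope E(V) to locate the equilibrium volume V0
# (a parabola here; the package would fit a proper EOS form instead).
b2, b1, _ = np.polyfit(volumes, best_E, 2)
V0 = -b1 / (2.0 * b2)

# Step 3: fit c/a versus V and evaluate at V0 to get the optimised ratio.
c1, c0 = np.polyfit(volumes, best_r, 1)
r0 = c1 * V0 + c0
print(f"V0 = {V0:.2f}, (c/a)0 = {r0:.4f}")
```

From V0 and (c/a)0 the a and c lattice constants follow directly for a tetragonal or hexagonal cell.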

  6. Behavioural aspects of terrorism.

    PubMed

    Leistedt, Samuel J

    2013-05-10

    Behavioural and social sciences are useful in collecting and analysing intelligence data, understanding terrorism, and developing strategies to combat terrorism. This article aims to examine the psychopathological concepts of terrorism and discusses the developing roles for behavioural scientists. A systematic review was conducted of studies investigating behavioural aspects of terrorism. These studies were identified by a systematic search of databases, textbooks, and a supplementary manual search of references. Several fundamental concepts were identified that continue to influence the motives and the majority of the behaviours of those who support or engage in this kind of specific violence. Regardless of the psychological aspects and new roles for psychiatrists, the behavioural sciences will continue to be called upon to assist in developing better methods to gather and analyse intelligence, to understand terrorism, and perhaps to stem the radicalisation process. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Forensic imaging tools for law enforcement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SMITHPETER,COLIN L.; SANDISON,DAVID R.; VARGO,TIMOTHY D.

    2000-01-01

    Conventional methods of gathering forensic evidence at crime scenes are encumbered by difficulties that limit local law enforcement efforts to apprehend offenders and bring them to justice. Working with a local law-enforcement agency, Sandia National Laboratories has developed a prototype multispectral imaging system that can speed up the investigative search task and provide additional and more accurate evidence. The system, called the Criminalistics Light-imaging Unit (CLU), has demonstrated the capabilities of locating fluorescing evidence at crime scenes under normal lighting conditions and of imaging other types of evidence, such as untreated fingerprints, by direct white-light reflectance. CLU employs state-of-the-art technology that provides for viewing and recording of the entire search process on videotape. This report describes the work performed by Sandia to design, build, evaluate, and commercialize CLU.

  8. [Development of domain specific search engines].

    PubMed

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace expands at a pace nobody could have imagined, searching it efficiently and effectively has become very important. One solution to this problem is search engines, and many commercial search engines are already on the market. However, these engines return results too cumbersome for domain experts to tolerate. Using dedicated hardware and a commercial software package called OpenText, we have developed several domain-specific search engines, covering our institute's Web content, drugs, chemical safety, endocrine disruptors, and emergency response to chemical hazards. These engines have been available on our Web site for testing.
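The core of any such domain-specific engine is an inverted index over a curated document collection. The sketch below uses a hypothetical three-document corpus and a simple AND-query; it is a minimal illustration of the idea, not the OpenText software mentioned in the abstract.

```python
from collections import defaultdict

# Hypothetical domain corpus (doc id -> text), e.g. chemical-safety notes.
docs = {
    1: "endocrine disruptors in drinking water",
    2: "chemical safety data for laboratory solvents",
    3: "emergency response to a chemical hazard spill",
}

# Build the inverted index: token -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(query):
    """AND-search: ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for t in terms[1:]:
        result &= index.get(t, set())
    return result

print(search("chemical"))         # documents 2 and 3
print(search("chemical hazard"))  # document 3 only
```

Restricting the index to a vetted domain corpus is precisely what keeps the result lists small enough for expert users, in contrast to the general-purpose engines criticised above.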

  9. Mixed-methods designs in mental health services research: a review.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Chamberlain, Patricia; Hurlburt, Michael S; Landsverk, John

    2011-03-01

    Despite increased calls for use of mixed-methods designs in mental health services research, how and why such methods are being used and whether there are any consistent patterns that might indicate a consensus about how such methods can and should be used are unclear. Use of mixed methods was examined in 50 peer-reviewed journal articles found by searching PubMed Central and 60 National Institutes of Health (NIH)-funded projects found by searching the CRISP database over five years (2005-2009). Studies were coded for aims and the rationale, structure, function, and process for using mixed methods. A notable increase was observed in articles published and grants funded over the study period. However, most did not provide an explicit rationale for using mixed methods, and 74% gave priority to use of quantitative methods. Mixed methods were used to accomplish five distinct types of study aims (assess needs for services, examine existing services, develop new or adapt existing services, evaluate services in randomized controlled trials, and examine service implementation), with three categories of rationale, seven structural arrangements based on timing and weighting of methods, five functions of mixed methods, and three ways of linking quantitative and qualitative data. Each study aim was associated with a specific pattern of use of mixed methods, and four common patterns were identified. These studies offer guidance for continued progress in integrating qualitative and quantitative methods in mental health services research consistent with efforts by NIH and other funding agencies to promote their use.

  10. Online and call center referral for endocrine surgical pathology within institutions.

    PubMed

    Dhillon, Vaninder K; Al Khadem, Mai G; Tufano, Ralph P; Russell, Jonathon O

    2017-10-08

    We hypothesized that self-referred patients to academic centers would be equally distributed between general surgery and otolaryngology departments that perform thyroid surgery. We sought to quantify disparities in the assignment of these self-referred patients, who may reach an institution through call centers or online pathways. Cross-sectional survey. Key words "thyroid surgery" and "thyroid cancer" were used along with the name of the Accreditation Council for Graduate Medical Education-listed otolaryngology program in both the Google and Bing search engines. The top three search results for departments were reviewed, and a tally was given to general surgery (GS), otolaryngology-head and neck surgery (OLHNS), or neither. A multidisciplinary center with both GS and OLHNS was recorded as "equitable." Telephone calls were tallied if they were directed to GS or OLHNS. Out of 400 program tallies, 117 (29.25%) were directed to GS and 50 (12.5%) to OLHNS; an additional 181 (45.25%) were directed to neither group (P < .05), and 52 (13%) were referred to multidisciplinary ("equitable") groups. The telephone call survey had 62 patients (62%) assigned to a general surgeon, as opposed to 38 (38%) for OLHNS (P < .05). Five institutions offered a multidisciplinary group when searching with Bing, and 11 were found by searching with Google. There is not an equal distribution of self-referred patients with thyroid surgical pathology. It may be important to increase the online presence of OLHNS surgeons who perform thyroid surgery at academic medical institutions. Multidisciplinary centers focused on thyroid and parathyroid surgical disease represent one model of assigning self-referred patients. Laryngoscope, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic swarming behavior, allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics for clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied to image segmentation as a case-study application scenario.
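
The hybrid scheme the abstract describes can be sketched minimally: several search agents each hold a candidate centroid set, drift toward the best agent found so far (a generic swarm step, deliberately not any particular Ant/Bat/Cuckoo/Firefly/Wolf variant), and the winning set then seeds standard Lloyd iterations. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

def sse(X, centroids):
    # Within-cluster sum of squared errors for a candidate centroid set.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float(d.min(axis=1).sum())

def lloyd(X, centroids, iters=20):
    # Standard K-means refinement (Lloyd's algorithm).
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for k in range(len(centroids)):
            members = X[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids

def swarm_seeded_kmeans(X, k, agents=10, steps=30, rng=None):
    # Agents hold candidate centroid sets and drift toward the best agent;
    # the best set found then seeds ordinary Lloyd iterations.
    rng = rng or np.random.default_rng(0)
    pop = X[rng.choice(len(X), size=(agents, k))].astype(float)
    best = min(pop, key=lambda c: sse(X, c)).copy()
    for _ in range(steps):
        for i in range(agents):
            trial = pop[i] + 0.5 * (best - pop[i]) + rng.normal(0.0, 0.1, pop[i].shape)
            if sse(X, trial) < sse(X, pop[i]):
                pop[i] = trial
                if sse(X, trial) < sse(X, best):
                    best = trial.copy()
    return lloyd(X, best.copy())
```

On well-separated blobs this reliably beats centroids taken from a single random draw; the point of the swarm stage is only to hand Lloyd's algorithm a good starting point.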

  12. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic swarming behavior, allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics for clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied to image segmentation as a case-study application scenario. PMID:25202730

  13. A new numerical method for calculating extrema of received power for polarimetric SAR

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.

    2009-01-01

A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.
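
The abstract does not spell the iteration out, but one plausible reading of a "cross-step" scheme can be sketched with Stokes vectors g = (1, s), |s| = 1: alternately choose the receive polarization that maximizes power for the current transmit state, then the transmit polarization for the current receive state. The diagonal Kennaugh matrix used below is a toy example, not a real target.

```python
import numpy as np

def received_power(K, s_r, s_t):
    # P = 0.5 * g_r^T K g_t with Stokes vectors g = (1, s), |s| = 1.
    g_r = np.concatenate(([1.0], s_r))
    g_t = np.concatenate(([1.0], s_t))
    return float(0.5 * g_r @ K @ g_t)

def cross_step_max(K, iters=50, rng=None):
    # Alternate the two "half" optimizations: best receive state for the
    # current transmit state, then best transmit state for that receive state.
    rng = rng or np.random.default_rng(0)
    s_t = rng.normal(size=3)
    s_t /= np.linalg.norm(s_t)
    s_r = np.zeros(3)
    for _ in range(iters):
        v = K @ np.concatenate(([1.0], s_t))
        s_r = v[1:] / np.linalg.norm(v[1:])    # maximizes g_r . v over |s_r| = 1
        w = K.T @ np.concatenate(([1.0], s_r))
        s_t = w[1:] / np.linalg.norm(w[1:])
    return received_power(K, s_r, s_t), s_r, s_t
```

For the toy matrix diag(2, 1, 0.5, 0.3) the maximum is attained with both polarizations aligned along the first Stokes axis, giving P = 1.5.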

  14. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.
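
The gain from resampling can be illustrated on a toy phase-modulated sinusoid: interpolating the detector-frame series onto a grid uniform in source (binary-corrected) time concentrates the signal power back into a single frequency bin, which is what makes subsequent FFT-based correlation cheap. The orbital numbers below are arbitrary assumptions, not Sco X-1 parameters.

```python
import numpy as np

fs = 100.0                                # sample rate in Hz (toy value)
t = np.arange(0.0, 200.0, 1.0 / fs)       # detector time
f0, a_orb, p_orb = 10.0, 0.1, 40.0        # signal freq, delay amplitude, orbit period
# Binary motion delays arrival: tau is the source-frame emission time.
tau = t - a_orb * np.sin(2.0 * np.pi * t / p_orb)
x_det = np.sin(2.0 * np.pi * f0 * tau)    # phase-modulated in the detector frame

# Resampling step: interpolate onto a grid uniform in source time
# (tau is monotonic here because the modulation is slow).
t_src = np.arange(tau[0], tau[-1], 1.0 / fs)
x_src = np.interp(t_src, tau, x_det)

def peak_fraction(x):
    # Fraction of spectral power in the strongest FFT bin.
    p = np.abs(np.fft.rfft(x)) ** 2
    return float(p.max() / p.sum())
```

In the detector frame the power is smeared across orbital sidebands; after resampling it re-concentrates near f0, so the peak bin carries several times more of the total power.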

  15. Environmental context explains Lévy and Brownian movement patterns of marine predators.

    PubMed

    Humphries, Nicolas E; Queiroz, Nuno; Dyer, Jennifer R M; Pade, Nicolas G; Musyl, Michael K; Schaefer, Kurt M; Fuller, Daniel W; Brunnschweiler, Juerg M; Doyle, Thomas K; Houghton, Jonathan D R; Hays, Graeme C; Jones, Catherine S; Noble, Leslie R; Wearmouth, Victoria J; Southall, Emily J; Sims, David W

    2010-06-24

    An optimal search theory, the so-called Lévy-flight foraging hypothesis, predicts that predators should adopt search strategies known as Lévy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey. Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Lévy behaviour has recently been questioned. Consequently, whether foragers exhibit Lévy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory's central predictions. Here we use maximum-likelihood methods to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Lévy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Lévy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Lévy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Lévy-flight foraging hypothesis, supporting the contention that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.
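
The statistical machinery at issue (maximum-likelihood identification of Lévy patterns) is easy to sketch: draw power-law step lengths by inverse-transform sampling and recover the exponent with the continuous MLE; a Brownian walker's exponentially distributed steps would fail this fit. All values here are illustrative.

```python
import numpy as np

def mle_powerlaw_exponent(steps, x_min):
    # Continuous maximum-likelihood estimator for P(l) ~ l^(-mu), l >= x_min.
    steps = steps[steps >= x_min]
    return 1.0 + len(steps) / np.log(steps / x_min).sum()

rng = np.random.default_rng(42)
x_min, mu = 1.0, 2.0                      # mu = 2 is the Levy-optimal exponent
u = rng.uniform(size=20000)
levy_steps = x_min * u ** (-1.0 / (mu - 1.0))   # inverse-transform sampling
mu_hat = mle_powerlaw_exponent(levy_steps, x_min)
```

With 20,000 steps the estimator recovers the exponent to within a few hundredths, which is the kind of precision that lets movement data discriminate Lévy from Brownian behaviour.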

  16. Does survey method bias the description of northern goshawk nest-site structure?

    USGS Publications Warehouse

    Daw, S.K.; DeStefano, S.; Steidl, R.J.

    1998-01-01

Past studies on the nesting habitat of northern goshawks (Accipiter gentilis) often relied on nests found opportunistically, either during timber-sale operations, by searching apparently 'good' goshawk habitat, or by other search methods where areas were preselected based on known forest conditions. Therefore, a bias in the characterization of habitat surrounding northern goshawk nest sites may exist toward late-forest structure (large trees, high canopy closure). This potential problem has confounded interpretation of data on nesting habitat of northern goshawks and added to uncertainty in the review process to consider the species for federal listing as threatened or endangered. Systematic survey methods, which strive for complete coverage of an area and often use broadcasts of conspecific calls, have been developed to overcome these potential biases, but no study has compared habitat characteristics around nests found opportunistically with those found systematically. We compared habitat characteristics in a 0.4-ha area around nests found systematically (n = 27) versus those found opportunistically (n = 22) on 3 national forests in eastern Oregon. We found that both density of large trees (systematic: x̄ = 16.4 ± 3.1 trees/ha, x̄ ± SE; opportunistic: x̄ = 21.3 ± 3.2; P = 0.56) and canopy closure (systematic: x̄ = 72 ± 2%; opportunistic: x̄ = 70 ± 2%; P = 0.61) were similar around nests found with either search method. Our results diminish concern that past survey methods mischaracterized northern goshawk nest-site structure. However, because northern goshawks nest in a variety of forest cover types with a wide range of structural characteristics, these results do not decrease the value of systematic survey methods in determining the most representative habitat descriptions for northern goshawks. Rigorous survey protocols allow repeatability and comparability of monitoring efforts and results over time.

  17. Discovering Authorities and Hubs in Different Topological Web Graph Structures.

    ERIC Educational Resources Information Center

    Meghabghab, George

    2002-01-01

    Discussion of citation analysis on the Web considers Web hyperlinks as a source to analyze citations. Topics include basic graph theory applied to Web pages, including matrices, linear algebra, and Web topology; and hubs and authorities, including a search technique called HITS (Hyperlink Induced Topic Search). (Author/LRW)
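
HITS itself is compact enough to sketch: hub and authority scores are mutually reinforcing and are obtained by alternating multiplications with the link matrix until they stabilize.

```python
import numpy as np

def hits(A, iters=50):
    # A[i, j] = 1 if page i links to page j.
    n = A.shape[0]
    h = np.ones(n)
    a = np.ones(n)
    for _ in range(iters):
        a = A.T @ h                 # good authorities are cited by good hubs
        a /= np.linalg.norm(a)
        h = A @ a                   # good hubs cite good authorities
        h /= np.linalg.norm(h)
    return h, a
```

In the test graph, two pages linking to a common target make that target the top authority and themselves equal hubs.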

  18. Monte Carlo-based searching as a tool to study carbohydrate structure

    USDA-ARS?s Scientific Manuscript database

    A torsion angle-based Monte-Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various functions for evaluat...
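
A minimal torsion-angle Metropolis Monte Carlo of the kind such a routine presumably wraps can be sketched as follows; the potential is a toy stand-in for a real force field, and every name and parameter here is an illustrative assumption.

```python
import numpy as np

def torsion_energy(phi_deg):
    # Toy torsional potential (a stand-in for a force field); each term
    # is minimized at phi = 180 degrees, where the energy is zero.
    return float(np.sum(1.0 + np.cos(np.radians(phi_deg))))

def mc_torsion_search(n_angles=4, steps=5000, step_deg=30.0, temp=0.3, rng=None):
    # Metropolis Monte Carlo over torsion angles: perturb, then accept
    # downhill moves always and uphill moves with Boltzmann probability.
    rng = rng or np.random.default_rng(0)
    phi = rng.uniform(0.0, 360.0, n_angles)
    e = torsion_energy(phi)
    best_phi, best_e = phi.copy(), e
    for _ in range(steps):
        trial = (phi + rng.normal(0.0, step_deg, n_angles)) % 360.0
        e_t = torsion_energy(trial)
        if e_t < e or rng.uniform() < np.exp((e - e_t) / temp):
            phi, e = trial, e_t
            if e < best_e:
                best_phi, best_e = phi.copy(), e
    return best_phi, best_e
```

Swapping `torsion_energy` for a call into an external potential function is what makes this kind of routine easy to interface with multiple force fields, as the abstract notes.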

  19. 76 FR 8788 - Riverside Casualty, Inc.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-15

    .../search/search.htm or by calling (202) 551-8090. Applicant's Representations 1. The Haskell Company (``THC..., construction, real estate and facility management services. All of the outstanding shares of THC's common stock are owned by The Haskell Company Employee Stock Ownership Trust (``THC ESOP''); Preston H. Haskell III...

  20. CINT - Center for Integrated Nanotechnologies

    Science.gov Websites


  1. Subject Specific Databases: A Powerful Research Tool

    ERIC Educational Resources Information Center

    Young, Terrence E., Jr.

    2004-01-01

    Subject specific databases, or vortals (vertical portals), are databases that provide highly detailed research information on a particular topic. They are the smallest, most focused search tools on the Internet and, in recent years, they've been on the rise. Currently, more of the so-called "mainstream" search engines, subject directories, and…

  2. 76 FR 24505 - Great Lakes Pilotage Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ..., travel time and delay/detention time using historical traffic records. New Business: Review of bridge... that the public comment period may end before the time indicated, following the last call for comments... times at the call of the Secretary. Further information about GLPAC is available by searching on ``Great...

  3. SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records

    USGS Publications Warehouse

    Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.

    2013-01-01

    This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
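
The rigid-block (Newmark) analysis at the core of the program reduces to a simple integration rule: the block begins to slide when ground acceleration exceeds its critical (yield) acceleration, and sliding velocity is integrated until friction brings the block back to rest. A sketch on a synthetic pulse record (not one of the program's strong-motion records):

```python
import numpy as np

def newmark_displacement(acc, dt, a_c):
    # Rigid-block sliding: slip starts when ground acceleration exceeds the
    # critical acceleration a_c; relative velocity is then integrated until
    # the a_c deficit (friction) brings the block back to rest.
    v = 0.0
    d = 0.0
    for a in acc:
        if v > 0.0 or a > a_c:
            v = max(v + (a - a_c) * dt, 0.0)   # block cannot slide backward
            d += v * dt
    return d

dt = 0.01
acc = np.concatenate([np.full(100, 0.5), np.zeros(300)])   # 1-s pulse, m/s^2
disp = newmark_displacement(acc, dt, a_c=0.2)
```

For this pulse the closed-form displacement is 0.15 m accumulated while the pulse acts plus 0.225 m while the block decelerates, 0.375 m in total, which the discrete integration reproduces.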

  4. Generalisation of the identity method for determination of high-order moments of multiplicity distributions with a software implementation

    NASA Astrophysics Data System (ADS)

    Maćkowiak-Pawłowska, Maja; Przybyła, Piotr

    2018-05-01

Incomplete particle identification limits the experimentally available phase-space region for identified particle analysis. This problem affects ongoing fluctuation and correlation studies, including the search for the critical point of strongly interacting matter performed at the SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth-order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n=2 and n=3. Moreover, we present an open-source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified-particle multiplicity distributions from the measured ones, provided that the response function of the detector is known.

  5. Earth Observing System Data Gateway

    NASA Technical Reports Server (NTRS)

    Pfister, Robin; McMahon, Joe; Amrhein, James; Sefert, Ed; Marsans, Lorena; Solomon, Mark; Nestler, Mark

    2006-01-01

    The Earth Observing System Data Gateway (EDG) software provides a "one-stop-shopping" standard interface for exploring and ordering Earth-science data stored at geographically distributed sites. EDG enables a user to do the following: 1) Search for data according to high-level criteria (e.g., geographic location, time, or satellite that acquired the data); 2) Browse the results of a search, viewing thumbnail sketches of data that satisfy the user s criteria; and 3) Order selected data for delivery to a specified address on a chosen medium (e.g., compact disk or magnetic tape). EDG consists of (1) a component that implements a high-level client/server protocol, and (2) a collection of C-language libraries that implement the passing of protocol messages between an EDG client and one or more EDG servers. EDG servers are located at sites usually called "Distributed Active Archive Centers" (DAACs). Each DAAC may allow access to many individual data items, called "granules" (e.g., single Landsat images). Related granules are grouped into collections called "data sets." EDG enables a user to send a search query to multiple DAACs simultaneously, inspect the resulting information, select browseable granules, and then order selected data from the different sites in a seamless fashion.

  6. Searching for exoplanets using artificial intelligence

    NASA Astrophysics Data System (ADS)

    Pearson, Kyle A.; Palafox, Leon; Griffith, Caitlin A.

    2018-02-01

In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labor intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects which, unlike current methods, uses a neural network. Neural networks, also called "deep learning" or "deep nets", are designed to give a computer perception into a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time-series data with a greater accuracy than a least-squares method. Deep nets are highly generalizable, allowing data to be evaluated from different time series after interpolation without compromising performance. As validated by our deep net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analysis of large astronomy data sets.
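
Not the paper's network, but the kind of baseline it is compared against can be sketched as a box matched filter on a synthetic light curve; every number below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, depth, width, t0 = 2000, 0.005, 20, 700
flux = 1.0 + rng.normal(0, 0.001, n)   # normalized stellar flux with noise
flux[t0:t0 + width] -= depth           # inject one box-shaped transit dip

# Matched filter: correlate the mean-subtracted, negated flux with a box
# the same width as the transit; the peak of the score marks the epoch.
kernel = np.ones(width) / width
score = np.correlate(flux.mean() - flux, kernel, mode="valid")
t_hat = int(score.argmax())
```

A 0.5% dip buried in 0.1% noise is recovered to within a couple of samples; the hand-coded part a deep net dispenses with is the assumption that the dip is box-shaped and of known width.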

  7. Finding the global minimum: a fuzzy end elimination implementation

    NASA Technical Reports Server (NTRS)

    Keller, D. A.; Shibata, M.; Marcus, E.; Ornstein, R. L.; Rein, R.

    1995-01-01

    The 'fuzzy end elimination theorem' (FEE) is a mathematically proven theorem that identifies rotameric states in proteins which are incompatible with the global minimum energy conformation. While implementing the FEE we noticed two different aspects that directly affected the final results at convergence. First, the identification of a single dead-ending rotameric state can trigger a 'domino effect' that initiates the identification of additional rotameric states which become dead-ending. A recursive check for dead-ending rotameric states is therefore necessary every time a dead-ending rotameric state is identified. It is shown that, if the recursive check is omitted, it is possible to miss the identification of some dead-ending rotameric states causing a premature termination of the elimination process. Second, we examined the effects of removing dead-ending rotameric states from further considerations at different moments of time. Two different methods of rotameric state removal were examined for an order dependence. In one case, each rotamer found to be incompatible with the global minimum energy conformation was removed immediately following its identification. In the other, dead-ending rotamers were marked for deletion but retained during the search, so that they influenced the evaluation of other rotameric states. When the search was completed, all marked rotamers were removed simultaneously. In addition, to expand further the usefulness of the FEE, a novel method is presented that allows for further reduction in the remaining set of conformations at the FEE convergence. In this method, called a tree-based search, each dead-ending pair of rotamers which does not lead to the direct removal of either rotameric state is used to reduce significantly the number of remaining conformations. In the future this method can also be expanded to triplet and quadruplet sets of rotameric states. 
We tested our implementation of the FEE by exhaustively searching ten protein segments and found that the FEE identified the global minimum every time. For each segment, the global minimum was exhaustively searched in two different environments: (i) the segments were extracted from the protein and exhaustively searched in the absence of the surrounding residues; (ii) the segments were exhaustively searched in the presence of the remaining residues fixed at crystal structure conformations. We also evaluated the performance of the method for accurately predicting side chain conformations. We examined the influence of factors such as type and accuracy of backbone template used, and the restrictions imposed by the choice of potential function, parameterization and rotamer database. Conclusions are drawn on these results and future prospects are given.

  8. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

    The large increase in the size of patent collections has led to the need of efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which covers the putatively most frequent search behaviours of intellectual property officers in medical chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries was improving retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental for search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluate. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state of the art methods applied to patent corpora. 
We have shown that properly tuning the system to each search task clearly increases its effectiveness. We conclude that different search tasks demand different retrieval-engine settings in order to yield optimal end-user retrieval.

  9. Autocorrelation techniques for soft photogrammetry

    NASA Astrophysics Data System (ADS)

    Yao, Wu

In this thesis, research is carried out on image processing, image-matching search strategies, feature types in image matching, and the optimal window size in image matching. To make comparisons, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single-photograph rectification and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results from image matching. Comparison between these four types of ground feature shows that the methods developed here improve the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store the information about the stereo model and image matching. A comparison is also made between cross correlation image matching (CCIM), least difference image matching (LDIM) and least squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study, the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm, developed in this research, is used instead of the whole-range searching algorithm.
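
The CCIM baseline can be sketched in one dimension: slide a template along a scanline and keep the shift with the highest normalized correlation coefficient (the thesis's "coefficient surface" reduces here to a 1-D curve). The whole-range search below is what a best-track strategy would prune; names are illustrative.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation coefficient of two equal-length windows.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_scanline(line, template, search_range):
    # Whole-range search over candidate shifts; returns (coefficient, shift).
    w = len(template)
    return max((ncc(line[x:x + w], template), x) for x in search_range)
```

Even with noise added to the template, the correlation peak pinpoints the true shift, which is why the coefficient surface is a useful diagnostic of match quality over different ground features.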

  10. The SOBANE risk management strategy and the Déparis method for the participatory screening of the risks.

    PubMed

    Malchaire, J B

    2004-08-01

The first section of the document describes a risk-prevention strategy, called SOBANE, in four levels: screening, observation, analysis and expertise. The aim is to make risk prevention faster, more cost effective, and more effective in coordinating the contributions of the workers themselves, their management, the internal and external occupational health (OH) practitioners and the experts. These four levels are: screening, where the risk factors are detected by the workers and their management, and obvious solutions are implemented; observation, where the remaining problems are studied in more detail, one by one, and the reasons and the solutions are discussed in detail; analysis, where, when necessary, an OH practitioner is called upon to carry out appropriate measurements to develop specific solutions; and expertise, where, in very sophisticated and rare cases, the assistance of an expert is called upon to solve a particular problem. The method for the participatory screening of the risks (in French: Dépistage Participatif des Risques), Déparis, is proposed for the first level, screening, of the SOBANE strategy. The work situation is systematically reviewed and all the aspects conditioning the easiness, the effectiveness and the satisfaction at work are discussed, in search of practical prevention measures. The points to be studied in more detail at level 2, observation, are identified. The method is carried out during a meeting of key workers and technical staff. The method proves to be simple, economical in time and means, and plays a significant role in the development of a dynamic plan of risk management and of a culture of dialogue in the company.

  11. The neurosciences and the search for a unified psychology: the science and esthetics of a single framework

    PubMed Central

    Stam, Henderikus J.

    2015-01-01

    The search for a so-called unified or integrated theory has long served as a goal for some psychologists, even if the search is often implicit. But if the established sciences do not have an explicitly unified set of theories, then why should psychology? After examining this question again I argue that psychology is in fact reasonably unified around its methods and its commitment to functional explanations, an indeterminate functionalism. The question of the place of the neurosciences in this framework is complex. On the one hand, the neuroscientific project will not likely renew and synthesize the disparate arms of psychology. On the other hand, their reformulation of what it means to be human will exert an influence in multiple ways. One way to capture that influence is to conceptualize the brain in terms of a technology that we interact with in a manner that we do not yet fully understand. In this way we maintain both a distance from neuro-reductionism and refrain from committing to an unfettered subjectivity. PMID:26500571

  12. A novel artificial bee colony algorithm based on modified search equation and orthogonal learning.

    PubMed

    Gao, Wei-feng; Liu, San-yang; Huang, Ling-ling

    2013-06-01

The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive with other population-based algorithms. However, ABC has an insufficiency regarding its solution search equation, which is good at exploration but poor at exploitation. To address this issue, we first propose an improved ABC method called CABC, where a modified search equation is applied to generate a candidate solution to improve the search ability of ABC. Furthermore, we use the orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's good character of sampling a small number of representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. The comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions.
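
The standard ABC machinery the paper modifies can be sketched as follows; the line marked as the solution search equation is exactly the part CABC replaces with a modified form (not reproduced here), and all parameter values are illustrative.

```python
import numpy as np

def abc_minimize(f, dim=5, n_food=20, iters=200, limit=20, rng=None):
    # Minimal artificial bee colony sketch (employed-bee and scout phases).
    rng = rng or np.random.default_rng(0)
    X = rng.uniform(-5, 5, (n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    best_x, best_f = X[fit.argmin()].copy(), float(fit.min())
    for _ in range(iters):
        for i in range(n_food):                        # employed-bee phase
            k = (i + int(rng.integers(1, n_food))) % n_food   # partner != i
            j = int(rng.integers(dim))
            v = X[i].copy()
            # Standard ABC solution search equation -- the part CABC modifies:
            v[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            fv = f(v)
            if fv < fit[i]:                            # greedy selection
                X[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        stale = trials > limit                         # scout phase
        n_stale = int(stale.sum())
        if n_stale:
            X[stale] = rng.uniform(-5, 5, (n_stale, dim))
            fit[stale] = [f(x) for x in X[stale]]
            trials[stale] = 0
        if fit.min() < best_f:                         # track best-so-far
            best_f = float(fit.min())
            best_x = X[fit.argmin()].copy()
    return best_x, best_f
```

Because the equation perturbs only one randomly chosen coordinate toward a random neighbour, it explores well but exploits slowly, which is the weakness the modified equation and the OL strategy target.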

  13. Roles for librarians in systematic reviews: a scoping review

    PubMed Central

    Spencer, Angela J.; Eldredge, Jonathan D.

    2018-01-01

    Objective What roles do librarians and information professionals play in conducting systematic reviews? Librarians are increasingly called upon to be involved in systematic reviews, but no study has considered all the roles librarians can perform. This inventory of existing and emerging roles aids in defining librarians’ systematic reviews services. Methods For this scoping review, the authors conducted controlled vocabulary and text-word searches in the PubMed; Library, Information Science & Technology Abstracts; and CINAHL databases. We separately searched for articles published in the Journal of the European Association for Health Information and Libraries, Evidence Based Library and Information Practice, the Journal of the Canadian Heath Libraries Association, and Hypothesis. We also text-word searched Medical Library Association annual meeting poster and paper abstracts. Results We identified 18 different roles filled by librarians and other information professionals in conducting systematic reviews from 310 different articles, book chapters, and presented papers and posters. Some roles were well known such as searching, source selection, and teaching. Other less documented roles included planning, question formulation, and peer review. We summarize these different roles and provide an accompanying bibliography of references for in-depth descriptions of these roles. Conclusion Librarians play central roles in systematic review teams, including roles that go beyond searching. This scoping review should encourage librarians who are fulfilling roles that are not captured here to document their roles in journal articles and poster and paper presentations. PMID:29339933

  14. Citizen Participation -- A Tool for Conflict Management on the Public Lands

    ERIC Educational Resources Information Center

    Irland, Lloyd C.

    1975-01-01

    The search for harmony in public land-use planning is a hopeless pursuit. A more realistic approach is a conflict management strategy that emphasizes concern for the planning process, rather than for the plan itself. The search for legitimate planning processes calls for the conscious building of citizen participation. (JG)

  15. Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.

    ERIC Educational Resources Information Center

    Carlson, Randal D.; Repman, Judi

    2002-01-01

    Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)

  16. ACHP | News | ACHP Issue Spotlight: Transmission Lines in the West

    Science.gov Websites

    ACHP Announces GAO Report Calling for Improved Data on Historic Properties. The U.S. Government Accountability Office (GAO) recently released a report entitled "Improved Data

  17. RJMCMC based Text Placement to Optimize Label Placement and Quantity

    NASA Astrophysics Data System (ADS)

    Touya, Guillaume; Chassin, Thibaud

    2018-05-01

    Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography as well as in computational geometry. Methods that search for an optimal or nearly optimal solution satisfying a set of constraints, such as avoiding label overlaps, have been proposed in the literature. Most of these methods focus on finding the optimal positions for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo, which makes it easy to model the removal or addition of labels during the optimization iterations. The method, still preliminary for now, is tested on a real dataset, and the first results are encouraging.
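
    The appeal of a reversible-jump formulation is that proposals can change the number of labels, not just their positions. The toy sketch below is not the authors' model: it selects a subset of fixed one-dimensional candidate positions with birth/death proposals and a Metropolis acceptance test on an energy that penalizes overlaps and rewards placed labels.

```python
import math
import random

def optimize_labels(candidates, width=1.0, iters=3000, temp=0.3, seed=1):
    """Toy reversible-jump-style label selection (illustrative only).

    State = subset of candidate x-positions that receive a label. Birth and
    death proposals change the label count; each jump is accepted or
    rejected by a Metropolis test on an overlap-penalizing energy."""
    rng = random.Random(seed)

    def energy(placed):
        xs = sorted(candidates[i] for i in placed)
        overlaps = sum(1 for a, b in zip(xs, xs[1:]) if b - a < width)
        return 10.0 * overlaps - len(placed)   # penalize overlap, reward labels

    placed = set()
    e = energy(placed)
    for _ in range(iters):
        new = set(placed)
        if rng.random() < 0.5 and len(new) < len(candidates):
            new.add(rng.randrange(len(candidates)))        # birth (may be a no-op)
        elif new:
            new.remove(rng.choice(sorted(new)))            # death
        e_new = energy(new)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            placed, e = new, e_new
    return sorted(candidates[i] for i in placed)
```

A full RJMCMC treatment would also weight the acceptance ratio by the proposal densities of the dimension-changing moves; the sketch omits that correction for brevity.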

  18. 2013 - Life is a Cosmic Phenomenon : The "Search for Water" evolves into the "Search for Life"

    NASA Astrophysics Data System (ADS)

    Smith, William E.

    2013-03-01

    We propose that the 2013 data from the Kepler Mission (giving a current estimate of the number of earth-like planets in the habitable zone of sun-like stars as 144 billion) has caused a consciousness change in human belief in the probability of life off earth. This seems to have affected NASA's public statements, which are now leaning to the more visionary mission goal of the "Search for Life" rather than the 1975-2012 focus of the "Search for Water". We propose that the first confirmed earth-like planet, expected to be announced later this year, be called "BORUCKI" in honour of the visionary USA scientist Bill Borucki, the father of the Kepler Mission. We explore the 2013 status of the Hoyle-Wickramasinghe Model of Panspermia, its hypothesis, propositions, experiments and evidence. We use the Karl Popper model for scientific hypotheses (1). Finally we explore Sir Fred Hoyle's vision of a planetary microbe defense system we call the Hoyle Shield. We explore the subsystem components of the shield and assess some options for these components using break-through technologies already available.

  19. Text Mining of the Classical Medical Literature for Medicines That Show Potential in Diabetic Nephropathy

    PubMed Central

    Zhang, Lei; Li, Yin; Guo, Xinfeng; May, Brian H.; Xue, Charlie C. L.; Yang, Lihong; Liu, Xusheng

    2014-01-01

    Objectives. To apply modern text-mining methods to identify candidate herbs and formulae for the treatment of diabetic nephropathy. Methods. The method we developed includes three steps: (1) identification of candidate ancient terms; (2) systemic search and assessment of medical records written in classical Chinese; (3) preliminary evaluation of the effect and safety of candidates. Results. The ancient terms Xia Xiao, Shen Xiao, and Xiao Shen were determined as the most likely to correspond with diabetic nephropathy and used in text mining. A total of 80 Chinese formulae for treating conditions congruent with diabetic nephropathy recorded in medical books from the Tang Dynasty to the Qing Dynasty were collected. Sao si tang (also called Reeling Silk Decoction) was chosen to show the process of preliminary evaluation of the candidates. It had promising potential for development as a new agent for the treatment of diabetic nephropathy. However, further investigation of its safety in patients with renal insufficiency is still needed. Conclusions. The methods developed in this study offer a targeted approach to identifying traditional herbs and/or formulae as candidates for further investigation in the search for new drugs for modern disease. However, more effort is still required to improve our techniques, especially with regard to compound formulae. PMID:24744808

  20. An evolutionary algorithm for large traveling salesman problems.

    PubMed

    Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan

    2004-08-01

    This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSPs). The strengths and limitations of numerous well-known genetic operators are first analyzed, along with local search methods for TSPs, in terms of their solution quality and their mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA, is proposed, which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selection are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the numbers of cities range from 318 to 13,509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000, and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can find tours robustly, with a fixed small population and a limited family competition length, in reasonable time when used to solve large TSPs.
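
    The Lin-Kernighan local search integrated into HeSEA is involved; its simplest relative, 2-opt, already shows the core idea of improving a tour by reversing a segment whenever that removes a crossing. A minimal sketch, not the paper's implementation:

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a segment whenever doing so shortens the tour
    (2-opt, the simplest relative of the Lin-Kernighan move family)."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i == 0: those edges are adjacent
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) by (a,c) and (b,d)?
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

LK generalizes this by chaining variable-depth sequences of such edge exchanges, which is what makes it effective on large instances.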

  1. Static analysis of class invariants in Java programs

    NASA Astrophysics Data System (ADS)

    Bonilla-Quintero, Lidia Dionisia

    2011-12-01

    This paper presents a technique for the automatic inference of class invariants from Java bytecode. Class invariants are very important both for compiler optimization and as an aid to programmers in their efforts to reduce the number of software defects. We present the original DC-invariant analysis from Adam Webber, discuss its shortcomings, and suggest several different ways to improve it. To apply the DC-invariant analysis to identify DC-invariant assertions, all that one needs is a monotonic method analysis function and a suitable assertion domain. The DC-invariant algorithm is very general; however, the method analysis can be highly tuned to the problem at hand. For example, one could choose shape analysis as the method analysis function and use the DC-invariant analysis to simply extend it to an analysis that would yield class-wide invariants describing the shapes of linked data structures. We have a prototype implementation: a system we refer to as "the analyzer" that infers DC-invariant unary and binary relations and provides them to the user in a human-readable format. The analyzer uses those relations to identify unnecessary array bounds checks in Java programs and to perform null-reference analysis. It uses Adam Webber's relational constraint technique for the class-invariant binary relations. Early results with the analyzer were very imprecise in the presence of "dirty-called" methods. A dirty-called method is one that is called, either directly or transitively, from any constructor of the class, or from any method of the class at a point at which a disciplined field has been altered. This result was unexpected and forced an extensive search for improved techniques. An important contribution of this paper is the suggestion of several ways to improve the results by changing the way dirty-called methods are handled. The new techniques expand the set of class invariants that can be inferred over Webber's original results. The technique that produces the better results uses in-line analysis. Final results are promising: we can infer sound class invariants for full-scale applications, not just toy ones.

  2. Designing of skull defect implants using C1 rational cubic Bezier and offset curves

    NASA Astrophysics Data System (ADS)

    Mohamed, Najihah; Majid, Ahmad Abd; Piah, Abd Rahni Mt; Rajion, Zainul Ahmad

    2015-05-01

    Skull implants are constructed for reasons such as head trauma after an accident, an injury, or an infection, because of tumor invasion, or when autogenous bone is not suitable for replacement after a decompressive craniectomy (DC). The main objective of our study is to develop a simple method to redesign missing parts of the skull. The procedure begins with segmentation, data approximation, and estimation of the outer wall by a C1 continuous curve. Its offset curve is used to generate the inner wall. Harmony search (HS) is a derivative-free, real-parameter metaheuristic optimization algorithm inspired by the musical improvisation process of searching for a perfect state of harmony. In this study, data approximation by a rational cubic Bézier function uses HS to optimize the positions of the middle points and the values of the weights. All the phases contribute significantly to making our proposed technique automated. Graphical examples of several postoperative skulls are displayed to show the effectiveness of our proposed method.
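
    A minimal harmony search sketch for a generic minimization problem may clarify the algorithm. The parameter names follow common HS usage (harmony memory size, memory-considering rate, pitch-adjusting rate, bandwidth); the paper's actual objective, fitting a rational cubic Bézier to skull data, is not reproduced here.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimize f over box `bounds` with a minimal harmony search:
    each new harmony recalls values from memory (rate hmcr), optionally
    pitch-adjusts them by up to bw (rate par), or improvises at random;
    it replaces the worst stored harmony if it scores better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # recall from memory
                x = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                   # random improvisation
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

For curve fitting as in the paper, `f` would measure the distance between the rational cubic Bézier (parameterized by middle control points and weights) and the segmented skull data.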

  3. Linkages Between Clinical Practices and Community Organizations for Prevention: A Literature Review and Environmental Scan

    PubMed Central

    Hinnant, Laurie W.; Kane, Heather; Horne, Joseph; McAleer, Kelly; Roussel, Amy

    2012-01-01

    Objectives. We conducted a literature review and environmental scan to develop a framework for interventions that utilize linkages between clinical practices and community organizations for the delivery of preventive services, and to identify and characterize these efforts. Methods. We searched 4 major health services and social science electronic databases and conducted an Internet search to identify examples of linkage interventions in the areas of tobacco cessation, obesity, nutrition, and physical activity. Results. We identified 49 interventions, of which 18 examples described their evaluation methods or reported any intervention outcomes. Few conducted evaluations that were rigorous enough to capture changes in intermediate or long-term health outcomes. Outcomes in these evaluations were primarily patient-focused and did not include organizational or linkage characteristics. Conclusions. An attractive option to increase the delivery of preventive services is to link primary care practices to community organizations; evidence is not yet conclusive, however, that such linkage interventions are effective. Findings provide recommendations to researchers and organizations that fund research, and call for a framework and metrics to study linkage interventions. PMID:22690974

  4. Conformational space annealing scheme in the inverse design of functional materials

    NASA Astrophysics Data System (ADS)

    Kim, Sunghyun; Lee, In-Ho; Lee, Jooyoung; Oh, Young Jun; Chang, Kee Joo

    2015-03-01

    Recently, the so-called inverse method has drawn much attention, in which specific electronic properties are assigned first and target materials are subsequently searched for. In this work, we develop a new scheme for the inverse design of functional materials, in which the conformational space annealing (CSA) algorithm for global optimization is combined with first-principles density functional calculations. To implement the CSA, we need a series of ingredients: (i) an objective function to minimize, (ii) a 'distance' measure between two conformations, (iii) a local enthalpy minimizer for a given conformation, (iv) ways to combine two parent conformations to generate a daughter one, (v) a special conformation update scheme, and (vi) an annealing method along the 'distance' parameter axis. We show the results of applications to searching for Si crystals with direct band gaps and for the lowest-enthalpy phase of boron at a finite pressure, and discuss the efficiency of the present scheme. This work is supported by the National Research Foundation of Korea (NRF) under Grant No. NRF-2005-0093845 and by Samsung Science and Technology Foundation under Grant No. SSTFBA1401-08.

  5. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.

  6. RNA inverse folding using Monte Carlo tree search.

    PubMed

    Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji

    2017-11-06

    Artificially synthesized RNA molecules provide important ways of creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences of specific GC content that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has recently shown exceptional performance in Computer Go, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm has the ability to control GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA shows promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA .

  7. A novel adaptive Cuckoo search for optimal query plan generation.

    PubMed

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The rapid day-by-day growth in the number of web pages has driven the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the resource description framework (RDF). To improve the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS), for querying and generating optimal query plans for large RDF graphs, is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant gains in query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
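
    For orientation, a generic cuckoo search for continuous minimization can be sketched as below, with Lévy-flight steps drawn via the standard Mantegna approximation. This is a sketch of the base algorithm only; the paper's adaptive variant operates on discrete RDF query plans and is not reproduced here.

```python
import math
import random

def cuckoo_search(f, bounds, n=15, pa=0.25, iters=500, seed=0):
    """Generic cuckoo search: n nests hold candidate solutions; each nest
    takes a Levy flight scaled by its offset from the current best, keeps
    the move only if it improves, and a fraction pa of nests is abandoned
    and re-seeded at random each generation."""
    rng = random.Random(seed)
    beta = 1.5  # Levy exponent; sigma per the Mantegna approximation
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def levy():
        return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

    def clip(x):
        return [min(hi, max(lo, xi)) for xi, (lo, hi) in zip(x, bounds)]

    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    scores = [f(x) for x in nests]
    g_x, g_s = None, float("inf")
    for _ in range(iters):
        best = nests[min(range(n), key=scores.__getitem__)]
        for i in range(n):
            trial = clip([xi + 0.01 * levy() * (xi - bi)
                          for xi, bi in zip(nests[i], best)])
            s = f(trial)
            if s < scores[i]:                      # greedy replacement
                nests[i], scores[i] = trial, s
            if scores[i] < g_s:                    # track the global best
                g_x, g_s = list(nests[i]), scores[i]
        for i in range(n):                         # abandon ~pa of nests
            if rng.random() < pa:
                nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
                scores[i] = f(nests[i])
    return g_x, g_s
```

In a query-plan setting, the continuous position would be replaced by an encoding of a join order and `f` by a cost estimate for the plan.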

  8. Searching Remotely Sensed Images for Meaningful Nested Gestalten

    NASA Astrophysics Data System (ADS)

    Michaelsen, E.; Muench, D.; Arens, M.

    2016-06-01

    Even non-expert human observers sometimes still outperform automatic extraction of man-made objects from remotely sensed data. We conjecture that some of this remarkable capability can be explained by Gestalt mechanisms. Gestalt algebra gives a mathematical structure capturing such part-aggregate relations and the laws for forming an aggregate called a Gestalt. Primitive Gestalten are obtained from an input image, and the space of all possible Gestalt algebra terms is searched for well-assessed instances. This can be a very challenging combinatorial effort. The contribution at hand gives some tools and structures unfolding a finite and comparatively small subset of the possible combinations. Yet the intended Gestalten are still contained and found with high probability and moderate effort. Experiments are made with images obtained from a virtual globe system, using the SIFT method for extraction of the primitive Gestalten. Comparison is made with manually extracted ground-truth Gestalten salient to human observers.

  9. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
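
    The key idea behind Stride Search is that search-sector centers are laid out a fixed physical distance apart rather than a fixed number of grid indices apart, so high-latitude rows contain fewer points instead of the pole-ward crowding of a uniform latitude-longitude sweep. A minimal sketch of such a layout (illustrative only, not the published implementation):

```python
import math

def stride_points(stride_km, radius_km=6371.0):
    """Lay out search-sector centers roughly stride_km apart on a sphere:
    latitude rows are spaced by the stride, and each row gets only as many
    longitude points as fit around its (latitude-dependent) circumference."""
    pts = []
    lat_step = math.degrees(stride_km / radius_km)
    lat = -90.0 + lat_step / 2
    while lat < 90.0:
        circ = 2 * math.pi * radius_km * math.cos(math.radians(lat))
        n_lon = max(1, int(circ / stride_km))   # fewer points near the poles
        pts.extend((lat, 360.0 * k / n_lon) for k in range(n_lon))
        lat += lat_step
    return pts
```

A uniform grid point search would instead visit every (lat, lon) index pair, wasting work where meridians converge; with the layout above, each center defines a sector of roughly equal area to examine for storm criteria.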

  10. PORTR: Pre-Operative and Post-Recurrence Brain Tumor Registration

    PubMed Central

    Niethammer, Marc; Akbari, Hamed; Bilello, Michel; Davatzikos, Christos; Pohl, Kilian M.

    2014-01-01

    We propose a new method for deformable registration of pre-operative and post-recurrence brain MR scans of glioma patients. Performing this type of intra-subject registration is challenging as tumor, resection, recurrence, and edema cause large deformations, missing correspondences, and inconsistent intensity profiles between the scans. To address this challenging task, our method, called PORTR, explicitly accounts for pathological information. It segments tumor, resection cavity, and recurrence based on models specific to each scan. PORTR then uses the resulting maps to exclude pathological regions from the image-based correspondence term while simultaneously measuring the overlap between the aligned tumor and resection cavity. Embedded into a symmetric registration framework, we determine the optimal solution by taking advantage of both discrete and continuous search methods. We apply our method to scans of 24 glioma patients. Both quantitative and qualitative analysis of the results clearly show that our method is superior to other state-of-the-art approaches. PMID:24595340

  11. AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search

    DTIC Science & Technology

    1976-07-01

    AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search, by Douglas B. Lenat. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION IS UNLIMITED. ABSTRACT: A program, called "AM", is... While AM's "approach" to empirical research may be used in other scientific domains, the main limitation (reliance on hindsight) will probably recur

  12. Trends in chemical ecology revealed with a personal computer program for searching data bases of scientific references and abstracts.

    PubMed

    Byers, J A

    1992-09-01

    A compiled program, JCE-REFS.EXE (coded in the QuickBASIC language), for use on IBM-compatible personal computers is described. The program converts a DOS text file of current B-I-T-S (BIOSIS Information Transfer System) or BIOSIS Previews references into a DOS file of citations, including abstracts, in a general style used by scientific journals. The latter file can be imported directly into a word processor, or the program can convert the file into a random access data base of the references. The program can search the data base for up to 40 text strings with Boolean logic. Selected references in the data base can be exported as a DOS text file of citations. Using the search facility, articles in the Journal of Chemical Ecology from 1975 to 1991 were searched for certain key words in regard to semiochemicals, taxa, methods, chemical classes, and biological terms to determine trends in usage over the period. Positive trends were statistically significant in the use of the words: semiochemical, allomone, allelochemic, deterrent, repellent, plants, angiosperms, dicots, wind tunnel, olfactometer, electrophysiology, mass spectrometry, ketone, evolution, physiology, herbivore, defense, and receptor. Significant negative trends were found for: pheromone, vertebrates, mammals, Coleoptera, Scolytidae, Dendroctonus, lactone, isomer, and calling.

  13. Modeling the role of parallel processing in visual search.

    PubMed

    Cave, K R; Wolfe, J M

    1990-04-01

    Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.

  14. Rivals in the dark: how competition influences search in decisions under uncertainty.

    PubMed

    Phillips, Nathaniel D; Hertwig, Ralph; Kareev, Yaakov; Avrahami, Judith

    2014-10-01

    In choices between uncertain options, information search can increase the chances of distinguishing good from bad options. However, many choices are made in the presence of other choosers who may seize the better option while one is still engaged in search. How long do (and should) people search before choosing between uncertain options in the presence of such competition? To address this question, we introduce a new experimental paradigm called the competitive sampling game. We use both simulation and empirical data to compare search and choice between competitive and solitary environments. Simulation results show that minimal search is adaptive when one expects competitors to choose quickly or is uncertain about how long competitors will search. Descriptively, we observe that competition drastically reduces information search prior to choice.

  15. Scientific Evaluation and Review of Claims in Health Care (SEaRCH): A Streamlined, Systematic, Phased Approach for Determining "What Works" in Healthcare.

    PubMed

    Jonas, Wayne B; Crawford, Cindy; Hilton, Lara; Elfenbaum, Pamela

    2017-01-01

    Answering the question of "what works" in healthcare can be complex and requires the careful design and sequential application of systematic methodologies. Over the last decade, the Samueli Institute has, along with multiple partners, developed a streamlined, systematic, phased approach to this process called the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™). The SEaRCH process provides an approach for rigorously, efficiently, and transparently making evidence-based decisions about healthcare claims in research and practice with minimal bias. SEaRCH uses three methods combined in a coordinated fashion to help determine what works in healthcare. The first, the Claims Assessment Profile (CAP), seeks to clarify the healthcare claim and question, and its ability to be evaluated in the context of its delivery. The second method, the Rapid Evidence Assessment of the Literature (REAL©), is a streamlined, systematic review process conducted to determine the quantity, quality, and strength of evidence and risk/benefit for the treatment. The third method involves the structured use of expert panels (EPs). There are several types of EPs, depending on the purpose and need. Together, these three methods (CAP, REAL, and EP) can be integrated into a strategic approach to help answer, in a comprehensive way, the question of what works in healthcare and what it means. SEaRCH is a systematic, rigorous approach for evaluating healthcare claims of therapies, practices, programs, or products in an efficient and stepwise fashion. It provides an iterative, protocol-driven process that is customized to the intervention, consumer, and context. Multiple communities, including those involved in health service and policy, can benefit from this organized framework, assuring that evidence-based principles determine which healthcare practices with the greatest promise are used for improving the public's health and wellness.

  16. Toward intelligent information system

    NASA Astrophysics Data System (ADS)

    Komatsu, Sanzo

    NASA/RECON, the predecessor of the DIALOG system, was originally designed as a user-friendly system for astronauts, so that they would not mis-operate the machine despite the tension of outer space. Since then, DIALOG has endeavoured to develop a series of user-friendly systems, such as knowledge index and inbound gateway, as well as Version II. In this so-called end-user searching era, DIALOG has released a series of front-end systems in succession: DIALOG Business Connection, DIALOG Medical Connection, and OneSearch in 1986, early 1987, and late 1987, respectively. They are all called expert systems. In this paper, the features of each system are described in some detail and the remaining critical issues are also discussed.

  17. Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks

    PubMed Central

    Ahmed, Khandakar; Gregory, Mark A.

    2015-01-01

    The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most state-of-the-art distributed data centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric based similarity searching (DCSMSS) is proposed. DCSMSS takes its motivation from a vector distance index, called iDistance, to transform the issue of similarity searching into the problem of an interval search in one dimension. In addition, a sector based distance routing algorithm is used to efficiently route messages. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. PMID:25751081
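
    The iDistance transform can be sketched as follows: each point is keyed by its nearest reference point's index times a large constant plus its distance to that reference, so a radius query becomes a handful of interval probes on a sorted key list. This is a simplified centralized illustration of the index idea, not the DCSMSS routing protocol.

```python
import bisect
import math

def build_index(points, refs, c=1000.0):
    """iDistance-style index (sketch): assign each point to its nearest
    reference point i and encode it as the single key i * c + dist(p, ref_i);
    the index is this key/point list, sorted by key."""
    entries = []
    for p in points:
        i = min(range(len(refs)), key=lambda j: math.dist(p, refs[j]))
        entries.append((i * c + math.dist(p, refs[i]), p))
    entries.sort()
    return entries

def range_query(q, r, refs, entries, c=1000.0):
    """A radius-r query probes one key interval per reference point (by the
    triangle inequality, any true hit in partition i has a key within
    dist(q, ref_i) +/- r), then filters false positives exactly."""
    keys = [k for k, _ in entries]
    hits = []
    for i, ref in enumerate(refs):
        d = math.dist(q, ref)
        a = bisect.bisect_left(keys, i * c + max(0.0, d - r))
        b = bisect.bisect_right(keys, i * c + d + r)
        hits.extend(p for _, p in entries[a:b] if math.dist(q, p) <= r)
    return hits
```

The constant `c` only needs to exceed any possible distance, so partitions occupy disjoint key ranges; in a sensor network, the one-dimensional key space can then be mapped onto storage nodes.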

  19. A protein-dependent side-chain rotamer library.

    PubMed

    Bhuyan, Md Shariful Islam; Gao, Xin

    2011-12-14

    The protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function, and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call it a protein-dependent rotamer library. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of side-chain prediction accuracy and rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to that of the global-search-based side-chain prediction methods.

  20. Here's the Lineup for the "Dream Team"

    ERIC Educational Resources Information Center

    Broderick, John R.

    2008-01-01

    One of the nicest things about being on a search committee is getting to meet people from all over the campus, some of whom one had little or no contact with before. The downside of any search, though, despite some meals in classy restaurants, is the extra meetings, endless phone calls, numerous Equal Employment Opportunity and human-resources…

  1. Defense.gov - Special Report: Korean War Veterans Memorial

    Science.gov Websites

    Department of Defense special report on the Korean War Veterans Memorial, comprising a Memorial Home Page and a Photo Essay. The memorial honors those Americans who answered the call and who worked and fought in the war the United States joined from 1950 to 1953.

  2. CALLing All Foreign Language Teachers: Computer-Assisted Language Learning in the Classroom

    ERIC Educational Resources Information Center

    Erben, Tony, Ed.; Sarieva, Iona, Ed.

    2008-01-01

    This book is a comprehensive guide to help foreign language teachers use technology in their classrooms. It offers the best ways to integrate technology into teaching for student-centered learning. CALL Activities include: Email; Building a Web site; Using search engines; Powerpoint; Desktop publishing; Creating sound files; iMovie; Internet chat;…

  3. Asymptotic formulae for likelihood-based tests of new physics

    NASA Astrophysics Data System (ADS)

    Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer

    2011-02-01

    We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
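For the simplest case of a single-bin Poisson counting experiment, the Asimov data set (observed count set to its expectation s + b) yields the paper's closed-form median discovery significance Z_A = sqrt(2((s + b) ln(1 + s/b) - s)); a minimal sketch:

```python
import math

def asimov_discovery_significance(s, b):
    """Median discovery significance Z_A for a one-bin counting
    experiment with expected signal s and background b, using the
    asymptotic formula evaluated on the Asimov data set (n -> s + b)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))
```

For s much smaller than b the formula reduces to the familiar estimate s / sqrt(b); for larger s it corrects that estimate downward, which is one practical reason to prefer it when planning a search.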

  4. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

The problem of developing an optimal structure for an N-version software system is a very complex optimization problem. This makes deterministic optimization methods inappropriate for solving the stated problem. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version systems design. Those algorithms take into account the discovered specific features of the objective function. The practical experiments have shown the advantage of using these algorithm modifications because they reduce the search space.

  5. Neural-network-assisted genetic algorithm applied to silicon clusters

    NASA Astrophysics Data System (ADS)

    Marim, L. R.; Lemes, M. R.; dal Pino, A.

    2003-03-01

    Recently, a new optimization procedure that combines the power of artificial neural-networks with the versatility of the genetic algorithm (GA) was introduced. This method, called neural-network-assisted genetic algorithm (NAGA), uses a neural network to restrict the search space and it is expected to speed up the solution of global optimization problems if some previous information is available. In this paper, we have tested NAGA to determine the ground-state geometry of Sin (10⩽n⩽15) according to a tight-binding total-energy method. Our results indicate that NAGA was able to find the desired global minimum of the potential energy for all the test cases and it was at least ten times faster than pure genetic algorithm.

  6. Hybrid intelligent optimization methods for engineering problems

    NASA Astrophysics Data System (ADS)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct patterns. Moreover, mathematical modeling of natural phenomena is almost always based on differential equations. Differential equations are constructed with relative increments among the factors related to yield. Therefore, the gradients of these increments are essential to search the yield space. However, the landscape of yield is not a simple one and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) algorithms are popular non-gradient-based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodologies. For example, randomness is essential for the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, lower accuracy, or long computation times. Premature convergence is sometimes inevitable due to the lack of diversity. As the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators to provide the necessary variety within a population.
After close scrutiny of the diversity concept based on qualification and quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. The new approaches were applied to different aeronautical engineering problems in order to study their efficiency. These implementations were: applications to selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning problems of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, a 3D radar cross section minimization problem for a 3D air vehicle, and active flow control over a 2D airfoil. As demonstrated by these test cases, we observed that the new algorithms outperform the current popular algorithms. The principal role of this multi-frequency approach was to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provided local and global diversity during the reproduction phases of the generations. Additionally, the new approach also introduced random and controlled diversity. Because they remain population-based techniques, these methods were as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO were efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.

  7. Identifying National Availability of Abortion Care and Distance From Major US Cities: Systematic Online Search.

    PubMed

    Cartwright, Alice F; Karunaratne, Mihiri; Barr-Walker, Jill; Johns, Nicole E; Upadhyay, Ushma D

    2018-05-14

    Abortion is a common medical procedure, yet its availability has become more limited across the United States over the past decade. Women who do not know where to go for abortion care may use the internet to find abortion facility information, and there appear to be more online searches for abortion in states with more restrictive abortion laws. While previous studies have examined the distances women must travel to reach an abortion provider, to our knowledge no studies have used a systematic online search to document the geographic locations and services of abortion facilities. The objective of our study was to describe abortion facilities and services available in the United States from the perspective of a potential patient searching online and to identify US cities where people must travel the farthest to obtain abortion care. In early 2017, we conducted a systematic online search for abortion facilities in every state and the largest cities in each state. We recorded facility locations, types of abortion services available, and facility gestational limits. We then summarized the frequencies by region and state. If the online information was incomplete or unclear, we called the facility using a mystery shopper method, which simulates the perspective of patients calling for services. We also calculated distance to the closest abortion facility from all US cities with populations of 50,000 or more. We identified 780 facilities through our online search, with the fewest in the Midwest and South. Over 30% (236/780, 30.3%) of all facilities advertised the provision of medication abortion services only; this proportion was close to 40% in the Northeast (89/233, 38.2%) and West (104/262, 39.7%). The lowest gestational limit at which services were provided was 12 weeks in Wyoming; the highest was 28 weeks in New Mexico.
People in 27 US cities must travel over 100 miles (160 km) to reach an abortion facility; the state with the largest number of such cities is Texas (n=10). Online searches can provide detailed information about the location of abortion facilities and the types of services they provide. However, these facilities are not evenly distributed geographically, and many large US cities do not have an abortion facility. Long distances can push women to seek abortion in later gestations when care is even more limited. ©Alice F Cartwright, Mihiri Karunaratne, Jill Barr-Walker, Nicole E Johns, Ushma D Upadhyay. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 14.05.2018.
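The distance-to-closest-facility calculation can be sketched with the haversine great-circle formula; this illustrates the metric, and is not necessarily the exact geodesic method the authors used:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_to_closest(city, facilities):
    """Distance from one city to the nearest facility, where `city` and
    each facility are (lat, lon) pairs -- the quantity tabulated in the
    study for every US city of 50,000 or more people."""
    return min(haversine_miles(city[0], city[1], f[0], f[1]) for f in facilities)
```

Running this over all large-city coordinates against the facility list and flagging results over 100 miles would reproduce the kind of screening the study reports.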

  8. Mining collections of compounds with Screening Assistant 2

    PubMed Central

    2012-01-01

    Background: High-throughput screening assays have become the starting point of many drug discovery programs for large pharmaceutical companies as well as academic organisations. Despite the increasing throughput of screening technologies, the almost infinite chemical space remains out of reach, calling for tools dedicated to the analysis and selection of the compound collections intended to be screened. Results: We present Screening Assistant 2 (SA2), an open-source JAVA software dedicated to the storage and analysis of small to very large chemical libraries. SA2 stores unique molecules in a MySQL database, and encapsulates several chemoinformatics methods, among which: providers management, interactive visualisation, scaffold analysis, diverse subset creation, descriptors calculation, sub-structure / SMART search, similarity search and filtering. We illustrate the use of SA2 by analysing the composition of a database of 15 million compounds collected from 73 providers, in terms of scaffolds, frameworks, and undesired properties as defined by recently proposed HTS SMARTS filters. We also show how the software can be used to create diverse libraries based on existing ones. Conclusions: Screening Assistant 2 is a user-friendly, open-source software that can be used to manage collections of compounds and perform simple to advanced chemoinformatics analyses. Its modular design and growing documentation facilitate the addition of new functionalities, calling for contributions from the community. The software can be downloaded at http://sa2.sourceforge.net/. PMID:23327565

  9. Mining collections of compounds with Screening Assistant 2.

    PubMed

    Guilloux, Vincent Le; Arrault, Alban; Colliandre, Lionel; Bourg, Stéphane; Vayer, Philippe; Morin-Allory, Luc

    2012-08-31

    High-throughput screening assays have become the starting point of many drug discovery programs for large pharmaceutical companies as well as academic organisations. Despite the increasing throughput of screening technologies, the almost infinite chemical space remains out of reach, calling for tools dedicated to the analysis and selection of the compound collections intended to be screened. We present Screening Assistant 2 (SA2), an open-source JAVA software dedicated to the storage and analysis of small to very large chemical libraries. SA2 stores unique molecules in a MySQL database, and encapsulates several chemoinformatics methods, among which: providers management, interactive visualisation, scaffold analysis, diverse subset creation, descriptors calculation, sub-structure / SMART search, similarity search and filtering. We illustrate the use of SA2 by analysing the composition of a database of 15 million compounds collected from 73 providers, in terms of scaffolds, frameworks, and undesired properties as defined by recently proposed HTS SMARTS filters. We also show how the software can be used to create diverse libraries based on existing ones. Screening Assistant 2 is a user-friendly, open-source software that can be used to manage collections of compounds and perform simple to advanced chemoinformatics analyses. Its modular design and growing documentation facilitate the addition of new functionalities, calling for contributions from the community. The software can be downloaded at http://sa2.sourceforge.net/.

  10. A deviation display method for visualising data in mobile gamma-ray spectrometry.

    PubMed

    Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer

    2010-09-01

    A real-time visualisation method to be used in mobile gamma-spectrometric search operations with standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialization time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.
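A toy version of the idea can be sketched as follows, assuming a per-channel running background mean and a Poisson-style scaling; the authors' exact statistic and colour mapping are not reproduced here:

```python
def deviation_row(spectrum, bg_mean):
    """One row of a (toy) deviation display: per-channel deviation of
    the current spectrum from the running background estimate, scaled
    by the expected Poisson fluctuation, with negative deviations
    suppressed so that only positive significant changes stand out."""
    row = []
    for counts, mean in zip(spectrum, bg_mean):
        sigma = max(mean, 1.0) ** 0.5
        dev = (counts - mean) / sigma
        row.append(dev if dev > 0 else 0.0)
    return row

def update_background(bg_mean, spectrum, alpha=0.1):
    """Exponentially weighted running mean of the background, standing
    in for the initialization period described in the paper."""
    return [(1 - alpha) * m + alpha * c for m, c in zip(bg_mean, spectrum)]
```

Stacking successive rows over time gives the waterfall: once the background estimate has settled, the display stays near zero everywhere until a source enters range and its energy channels light up.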

  11. How to improve your PubMed/MEDLINE searches: 3. advanced searching, MeSH and My NCBI.

    PubMed

    Fatehi, Farhad; Gray, Leonard C; Wootton, Richard

    2014-03-01

    Although the basic PubMed search is often helpful, the results may sometimes be non-specific. For more control over the search process you can use the Advanced Search Builder interface. This allows a targeted search in specific fields, with the convenience of being able to select the intended search field from a list. It also provides a history of your previous searches. The search history is useful to develop a complex search query by combining several previous searches using Boolean operators. For indexing the articles in MEDLINE, the NLM uses a controlled vocabulary system called MeSH. This standardised vocabulary solves the problem of authors, researchers and librarians who may use different terms for the same concept. To be efficient in a PubMed search, you should start by identifying the most appropriate MeSH terms and use them in your search where possible. My NCBI is a personal workspace facility available through PubMed and makes it possible to customise the PubMed interface. It provides various capabilities that can enhance your search performance.
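As a sketch of how MeSH terms and Boolean operators combine into a query, and how such a query maps onto the NCBI E-utilities ESearch endpoint, consider the helpers below; they are illustrative, not an official client, and the tags and example terms are chosen for demonstration:

```python
from urllib.parse import urlencode

def build_pubmed_query(mesh_terms, free_text=None):
    """Combine MeSH headings (tagged [MeSH]) with optional free-text
    title/abstract terms (tagged [tiab]) using Boolean OR/AND, in the
    style of the Advanced Search Builder."""
    mesh = " OR ".join(f'"{t}"[MeSH]' for t in mesh_terms)
    query = f"({mesh})"
    if free_text:
        query += " AND " + " AND ".join(f"{t}[tiab]" for t in free_text)
    return query

def esearch_url(query, retmax=20):
    """URL for the NCBI E-utilities ESearch service against PubMed
    (constructed only; no network request is made here)."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    return base + "?" + urlencode({"db": "pubmed", "term": query, "retmax": retmax})
```

For example, `build_pubmed_query(["Telemedicine"], ["geriatrics"])` produces a query that restricts the MeSH heading with a title/abstract term, mirroring how a search history line is combined with AND in the Advanced Search Builder.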

  12. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.

    PubMed

    Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu

    2016-03-01

    A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions. One of the most efficient methods to solve this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BP at different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method in solving BP in many cases. The efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578) typically used for label categorization.

  13. Calling Where It Counts: Subordinate Pied Babblers Target the Audience of Their Vocal Advertisements.

    PubMed

    Humphries, David J; Finch, Fiona M; Bell, Matthew B V; Ridley, Amanda R

    2015-01-01

    For territorial group-living species, opportunities to reproduce on the natal territory can be limited by a number of factors including the availability of resources within a territory, access to unrelated individuals, and monopolies on reproduction by dominant group members. Individuals looking to reproduce are therefore faced with the options of either waiting for a breeding opportunity to arise in the natal territory, or searching for reproductive opportunities in non-natal groups. In the cooperatively breeding Southern pied babbler, Turdoides bicolor, most individuals who achieve reproductive success do so through taking up dominant breeding positions within non-natal groups. For subordinate pied babblers therefore, searching for breeding opportunities in non-natal groups is of primary importance as this represents the major route to reproductive success. However, prospecting (where individuals leave the group to search for reproductive opportunities within other groups) is costly and individuals rapidly lose weight when not part of a group. Here we demonstrate that subordinate pied babblers adopt an alternative strategy for mate attraction by vocal advertisement from within their natal territories. We show that subordinates focus their calling efforts on the edges of their territory, and specifically near boundaries with neighbouring groups that have potential breeding partners (unrelated individuals of the opposite sex). In contrast to prospecting, calling individuals showed no body mass loss associated with this behaviour, suggesting that calling from within the group may provide a 'cheap' advertisement strategy. Additionally, we show that subordinates use information regarding the composition of neighbouring groups to target the greatest number of potential mating partners.

  14. 2013; life is a cosmic phenomenon: the search for water evolves into the search for life

    NASA Astrophysics Data System (ADS)

    Smith, William E.

    2013-09-01

    The 2013 data from the Kepler Mission put the current estimate of the number of Earth-like planets in the habitable zone of sun-like stars in the Milky Way Galaxy at 144 billion. We propose that this estimate has caused a consciousness change in human belief in the probability of life off Earth. This seems to have affected NASA's public statements, which are now leaning to the more visionary mission goal of the "Search for Life" rather than the 1975-2012 focus of the "Search for Water". We propose that the first confirmed Earth-like planet, expected to be announced later this year, be called "BORUCKI" in honour of the visionary USA scientist Bill Borucki, the father of the Kepler Mission. We explore the 2013 status of the Hoyle-Wickramasinghe Model of Panspermia, its hypothesis, propositions, experiments and evidence. We use the Karl Popper model for scientific hypotheses (1). Finally we explore Sir Fred Hoyle's vision of a planetary microbe defense system we call the Hoyle Shield. We explore the subsystem components of the shield and assess some options for these components using breakthrough technologies already available.

  15. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.

  16. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.
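One common way to honour such precedence restrictions in a GA is to repair each offspring sequence so that every job follows all of its predecessors; the decoder below is an illustrative sketch of that technique, not Boeing's PPS/POIC code:

```python
def repair_precedence(order, preds):
    """Repair a candidate job sequence so every job appears after all
    of its predecessors -- a standard way to keep GA offspring feasible
    under sequencing restrictions. `preds[j]` is the set of jobs that
    must be completed before job j may start."""
    placed, result = set(), []
    remaining = list(order)
    while remaining:
        for j in remaining:
            # Take the first job (in candidate order) whose
            # predecessors have all been scheduled already.
            if preds.get(j, set()) <= placed:
                result.append(j)
                placed.add(j)
                remaining.remove(j)
                break
        else:
            raise ValueError("cyclic precedence constraints")
    return result
```

Crossover and mutation can then operate freely on permutations, with this repair applied before fitness evaluation, which keeps the search in the feasible region without penalty terms.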

  17. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is ‘stationarity’, and therefore several methods have recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can significantly improve reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge and parameter settings.

  18. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- Or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  19. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass-or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.
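The neighborhood-seeding loop described in the abstract can be sketched generically: each grid point (a launch date) is re-optimized starting from the best solution found among its neighbors, and the sweep repeats. Everything below, including the `local_improve` callable, is a hypothetical simplification of PEATSA, not its actual code:

```python
def iterate_with_neighbor_seeding(costs, solutions, local_improve, radius=1, iters=5):
    """Toy version of the seeding loop: each grid point re-optimises
    from the best solution found within `radius` neighbouring points,
    and the whole grid sweeps repeatedly. `local_improve(seed)` is a
    user-supplied (hypothetical) optimiser returning (cost, solution).
    `costs` and `solutions` are updated in place and returned."""
    n = len(solutions)
    for _ in range(iters):
        # Pick each point's seed from the current best neighbour
        # before any updates, as if all runs launched in parallel.
        seeds = []
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            best = min(range(lo, hi), key=lambda k: costs[k])
            seeds.append(solutions[best])
        # Re-optimise every point from its seed; keep improvements.
        for i in range(n):
            c, s = local_improve(seeds[i])
            if c < costs[i]:
                costs[i], solutions[i] = c, s
    return costs, solutions
```

With this structure a good solution found at one launch date diffuses outward by one neighborhood radius per iteration, which is how the catalog quality improves across the whole date range.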

  20. First Report of Using Portable Unmanned Aircraft Systems (Drones) for Search and Rescue.

    PubMed

    Van Tilburg, Christopher

    2017-06-01

    Unmanned aircraft systems (UAS), colloquially called drones, are used commonly for military, government, and civilian purposes, including both commercial and consumer applications. During a search and rescue mission in Oregon, a UAS was used to confirm a fatality in a slot canyon; this eliminated the need for a dangerous rappel at night by rescue personnel. A second search mission in Oregon used several UAS to clear terrain. This allowed search of areas that were not accessible or were difficult to clear by ground personnel. UAS with cameras may be useful for searching, observing, and documenting missions. It is possible that UAS might be useful for delivering equipment in difficult areas and in communication. Copyright © 2017. Published by Elsevier Inc.

  1. Investigating the enhanced Best Performance Algorithm for Annual Crop Planning problem based on economic factors.

    PubMed

    Adewumi, Aderemi Oluyinka; Chetty, Sivashan

    2017-01-01

    The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on this problem by presenting a new mathematical formulation, which is based on market economic factors. To determine solutions, a new local search metaheuristic algorithm, called the enhanced Best Performance Algorithm (eBPA), is investigated. The eBPA's results are compared against two well-known local search metaheuristic algorithms: Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.
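Of the two baselines named above, Simulated Annealing is easy to sketch as a generic local-search loop; the eBPA itself is not reproduced here, and the `cost` and `neighbor` callables are problem-specific placeholders:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=800, seed=0):
    """Minimal Simulated Annealing loop, one of the two baseline
    local-search metaheuristics the eBPA is compared against. Improving
    moves are always accepted; worsening moves are accepted with
    probability exp(-delta / t), where t decays geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        t *= cooling
    return best, fbest
```

The same loop skeleton, with a different acceptance rule, is what distinguishes one trajectory-based metaheuristic from another, which is why such comparisons are standard for a new local-search algorithm like the eBPA.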

  2. Echolocation calls of Poey's flower bat (Phyllonycteris poeyi) unlike those of other phyllostomids.

    PubMed

    Mora, Emanuel C; Macías, Silvio

    2007-05-01

    Unlike any other foraging phyllostomid bat studied to date, Poey's flower bats (Phyllonycteris poeyi-Phyllostomidae) emit relatively long (up to 7.2 ms), intense, single-harmonic echolocation calls. These calls are readily detectable at distances of at least 15 m. Furthermore, the echolocation calls contain only the first harmonic, which is usually filtered out in the vocal tract of phyllostomids. The foraging echolocation calls of P. poeyi are more like search-phase echolocation calls of sympatric aerial-feeding bats (Molossidae, Vespertilionidae, Mormoopidae). Intense, long, narrowband, single-harmonic echolocation calls focus acoustic energy maximizing range and favoring detection, which may be particularly important for cruising bats, like P. poeyi, when flying in the open. Flying in enclosed spaces, P. poeyi emit short, low-intensity, frequency-modulated, multiharmonic echolocation calls typical of other phyllostomids. This is the first report of a phyllostomid species emitting long, intense, single-harmonic echolocation calls with most energy in the first harmonic.

  3. Developing a search engine for pharmacotherapeutic information that is not published in biomedical journals.

    PubMed

    Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P

    2011-01-01

    To identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines, and to develop a search engine to access this information. Compiling web sites that publish information on the rational use of medicines and have no commercial interests. Free-access web sites in Spanish, Galician, Catalan or English. Designing a search engine using the Google "custom search" application. Overall, 159 internet addresses were compiled and classified under 9 labels. We were able to recover the information from the selected sources using a search engine called "AlquimiA", available at http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.

  4. A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine

    NASA Astrophysics Data System (ADS)

    Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu

    In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. A selective meta-search engine needs a method for selecting the search engines appropriate to a user's query. Most existing methods use statistical data such as document frequency, and may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from the documents in a search engine and is used as a source description of that search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection that considers the relationships between terms, and thus overcomes the problems caused by polysemous words. Further, our method does not require a centralized broker maintaining data, such as document frequencies, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method than with other existing methods.
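
    The idea of a per-engine thesaurus as a source description can be illustrated with a toy sketch. The co-occurrence thesaurus and the scoring rule below are illustrative assumptions, not the paper's exact construction; the point is that related terms in the rest of the query disambiguate polysemous words.

```python
from collections import defaultdict
from itertools import combinations

# Build a tiny co-occurrence "thesaurus" from an engine's documents:
# two terms are related if they appear together in some document.
def build_thesaurus(docs):
    related = defaultdict(set)
    for doc in docs:
        terms = set(doc.lower().split())
        for a, b in combinations(terms, 2):
            related[a].add(b)
            related[b].add(a)
    return related

# Score an engine for a query: a query term counts if the thesaurus knows it,
# and its related terms that overlap the rest of the query add weight. This
# disambiguates polysemous words like "java" (coffee vs. programming).
def score_engine(thesaurus, query):
    qterms = set(query.lower().split())
    score = 0.0
    for t in qterms:
        if t in thesaurus:
            score += 1.0
            score += len(thesaurus[t] & (qterms - {t}))
    return score

def select_engine(engine_docs, query):
    scores = {name: score_engine(build_thesaurus(docs), query)
              for name, docs in engine_docs.items()}
    return max(scores, key=scores.get)
```

    For example, the query "java machine" scores higher on an engine whose documents pair "java" with "machine" than on one pairing it with "roast".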

  5. Using geospatial technology to process 911 calls after Hurricanes Katrina and Rita: Chapter 3B in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Conzelmann, Craig P.; Sleavin, William; Couvillion, Brady R.

    2007-01-01

    The flooding that ensued in the Greater New Orleans area after Hurricane Katrina left thousands of victims trapped and in need of emergency rescue. This paper describes the processing of raw 911-call data into search and rescue products used by emergency responders after the storm.

  6. Measurements of the Stiffness and Thickness of the Pavement Asphalt Layer Using the Enhanced Resonance Search Method

    PubMed Central

    Zakaria, Nur Mustakiza; Yusoff, Nur Izzi Md.; Hardwiyono, Sentot; Mohd Nayan, Khairul Anuar

    2014-01-01

    Enhanced resonance search (ERS) is a nondestructive testing method that has been created to evaluate the quality of a pavement by means of a special instrument called the pavement integrity scanner (PiScanner). This technique can be used to assess the thickness of the road pavement structure and the profile of shear wave velocity by using the principle of surface wave and body wave propagation. In this study, the ERS technique was used to determine the actual thickness of the asphaltic pavement surface layer, while the shear wave velocities obtained were used to determine its dynamic elastic modulus. A total of fifteen locations were identified and the results were then compared with the specifications of the Malaysian PWD, MDD UKM, and IKRAM. It was found that the value of the elastic modulus of materials is between 3929 MPa and 17726 MPa. A comparison of the average thickness of the samples with the design thickness of MDD UKM showed a difference of 20 to 60%. Thickness of the asphalt surface layer followed the specifications of Malaysian PWD and MDD UKM, while some of the values of stiffness obtained are higher than the standard. PMID:25276854

  7. Measurements of the stiffness and thickness of the pavement asphalt layer using the enhanced resonance search method.

    PubMed

    Zakaria, Nur Mustakiza; Yusoff, Nur Izzi Md; Hardwiyono, Sentot; Nayan, Khairul Anuar Mohd; El-Shafie, Ahmed

    2014-01-01

    Enhanced resonance search (ERS) is a nondestructive testing method that has been created to evaluate the quality of a pavement by means of a special instrument called the pavement integrity scanner (PiScanner). This technique can be used to assess the thickness of the road pavement structure and the profile of shear wave velocity by using the principle of surface wave and body wave propagation. In this study, the ERS technique was used to determine the actual thickness of the asphaltic pavement surface layer, while the shear wave velocities obtained were used to determine its dynamic elastic modulus. A total of fifteen locations were identified and the results were then compared with the specifications of the Malaysian PWD, MDD UKM, and IKRAM. It was found that the value of the elastic modulus of materials is between 3929 MPa and 17726 MPa. A comparison of the average thickness of the samples with the design thickness of MDD UKM showed a difference of 20 to 60%. Thickness of the asphalt surface layer followed the specifications of Malaysian PWD and MDD UKM, while some of the values of stiffness obtained are higher than the standard.

  8. Interactive genetic algorithm for user-centered design of distributed conservation practices in a watershed: An examination of user preferences in objective space and user behavior

    NASA Astrophysics Data System (ADS)

    Piemonti, Adriana Debora; Babbar-Sebens, Meghna; Mukhopadhyay, Snehasis; Kleinberg, Austin

    2017-05-01

    Interactive Genetic Algorithms (IGA) are advanced human-in-the-loop optimization methods that enable humans to give feedback, based on their subjective and unquantified preferences and knowledge, during the algorithm's search process. While these methods are gaining popularity in multiple fields, there is a critical lack of data and analyses on (a) the nature of interactions of different humans with interfaces of decision support systems (DSS) that employ IGA in water resources planning problems and on (b) the effect of human feedback on the algorithm's ability to search for design alternatives desirable to end-users. In this paper, we present results and analyses of observational experiments in which different human participants (surrogates and stakeholders) interacted with an IGA-based, watershed DSS called WRESTORE to identify plans of conservation practices in a watershed. The main goal of this paper is to evaluate how the IGA adapts its search process in the objective space to a user's feedback, and identify whether any similarities exist in the objective space of plans found by different participants. Some participants focused on the entire watershed, while others focused only on specific local subbasins. Additionally, two different hydrology models were used to identify any potential differences in interactive search outcomes that could arise from differences in the numerical values of benefits displayed to participants. Results indicate that stakeholders, in comparison to their surrogates, were more likely to use multiple features of the DSS interface to collect information before giving feedback, and dissimilarities existed among participants in the objective space of design alternatives.

  9. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    PubMed

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds the nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds the nearest training objects starting from the cluster nearest to the query object and uses the triangle inequality to reduce the number of distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree-based k-NN algorithm for all datasets and better than a ball-tree-based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces.
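
    The two-stage idea behind kMkNN, k-means preprocessing plus triangle-inequality pruning, can be sketched for the 1-NN case as follows. This is a simplified illustration of the technique, not the authors' implementation.

```python
import math
import random

def dist(a, b):
    return math.dist(a, b)

# Buildup stage: a plain k-means pass partitions the training points.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [tuple(sum(v) / len(pts) for v in zip(*pts)) if pts else centers[i]
                   for i, pts in enumerate(clusters)]
    return centers, clusters

# Each member is stored with its (precomputed) distance to its cluster center.
def build(points, k):
    centers, clusters = kmeans(points, k)
    annotated = [[(p, dist(p, c)) for p in pts] for c, pts in zip(centers, clusters)]
    return centers, annotated

# Searching stage: scan clusters nearest-first; the triangle inequality gives
# d(q,p) >= |d(q,c) - d(p,c)|, so any point whose lower bound already exceeds
# the best distance found so far is skipped without computing d(q,p).
def nearest(query, centers, annotated):
    best_d, best_p, checked = float("inf"), None, 0
    for i in sorted(range(len(centers)), key=lambda i: dist(query, centers[i])):
        dqc = dist(query, centers[i])
        for p, dpc in annotated[i]:
            if abs(dqc - dpc) >= best_d:
                continue  # pruned by the triangle inequality
            checked += 1
            d = dist(query, p)
            if d < best_d:
                best_d, best_p = d, p
    return best_p, best_d, checked
```

    The result is exact, matching a brute-force scan, while `checked` counts how many full distance computations were actually needed.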

  10. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that is guaranteed to find the global minimum energy solution (GMEC) is to combine dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
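
    The optimality guarantee that A* provides, and that gOSPREY preserves while parallelizing, comes from expanding nodes in order of f = g + h with an admissible heuristic h (one that never overestimates the remaining cost). A generic grid-based illustration of that principle, not OSPREY's conformation-tree search:

```python
import heapq

# A* on a 4-connected grid. Manhattan distance is admissible here, so the
# first time the goal is popped from the heap, its cost is globally optimal.
def a_star(start, goal, walls, size):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, node)
    g = {start: 0}
    while open_heap:
        f, cost, node = heapq.heappop(open_heap)
        if node == goal:
            return cost
        if cost > g.get(node, float("inf")):
            continue  # stale heap entry
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    return None  # goal unreachable
```

    In the protein-design setting the "grid" is a conformation tree and h is an energy lower bound, but the same argument yields the GMEC guarantee.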

  11. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.

  12. Convolutional neural network for earthquake detection and location

    PubMed Central

    Perol, Thibaut; Gharbi, Michaël; Denolle, Marine

    2018-01-01

    The recent evolution of induced seismicity in Central United States calls for exhaustive catalogs to improve seismic hazard assessment. Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today’s most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. We leverage the recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for earthquake detection and location from a single waveform. We apply our technique to study the induced seismicity in Oklahoma, USA. We detect more than 17 times more earthquakes than previously cataloged by the Oklahoma Geological Survey. Our algorithm is orders of magnitude faster than established methods. PMID:29487899

  13. The Mainz Neutrino Mass Experiment - New Results and Perspectives

    NASA Astrophysics Data System (ADS)

    Bonn, J.; Bornschein, B.; Bornschein, L.; Fickinger, L.; Flatt, B.; Kraus, Ch.; Otten, E. W.; Schall, J. P.; Ulrich, H.; Weinheimer, Ch.; Kazachenko, O.; Kovalik, A.

    2002-12-01

    Non-zero neutrino masses, strongly favoured by the recent atmospheric and solar neutrino experiments, have strong consequences for particle physics as well as for astrophysics and cosmology. The investigation of the tritium β spectrum near its endpoint measures the mass of the electron neutrino, m(νe), defined by m²(νe) = Σi |Uei|² mi² with the neutrino mixing matrix U and the neutrino mass eigenstates mi, and is the most sensitive of these so-called direct methods, providing information complementary to the searches for neutrinoless double β decay. Tritium β decay is the ideal method to distinguish between hierarchical and degenerate neutrino mass models. Furthermore, neutrino masses up to about 1 eV/c² are especially interesting for cosmology because of their contribution to the missing dark matter in the universe...

  14. Swarm Verification

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

    Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide-and-conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks, it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.

  15. DisArticle: a web server for SVM-based discrimination of articles on traditional medicine.

    PubMed

    Kim, Sang-Kyun; Nam, SeJin; Kim, SangHyun

    2017-01-28

    Much research has been done in Northeast Asia to show the efficacy of traditional medicine. While MEDLINE contains many biomedical articles including those on traditional medicine, it does not categorize those articles by specific research area. The aim of this study was to provide a method that searches for articles only on traditional medicine in Northeast Asia, including traditional Chinese medicine, from among the articles in MEDLINE. This research established an SVM-based classifier model to identify articles on traditional medicine. The TAK + HM classifier, trained with the features of title, abstract, keywords, herbal data, and MeSH, has a precision of 0.954 and a recall of 0.902. In particular, the feature of herbal data significantly increased the performance of the classifier. By using the TAK + HM classifier, a total of about 108,000 articles were discriminated as articles on traditional medicine from among all articles in MEDLINE. We also built a web server called DisArticle ( http://informatics.kiom.re.kr/disarticle ), in which users can search for the articles and obtain statistical data. Because much evidence-based research on traditional medicine has been published in recent years, it has become necessary to search for articles on traditional medicine exclusively in literature databases. DisArticle can help users to search for and analyze the research trends in traditional medicine.

  16. L1000CDS2: LINCS L1000 characteristic direction signatures search engine.

    PubMed

    Duan, Qiaonan; Reid, St Patrick; Clark, Neil R; Wang, Zichen; Fernandez, Nicolas F; Rouillard, Andrew D; Readhead, Ben; Tritsch, Sarah R; Hodos, Rachel; Hafner, Marc; Niepel, Mario; Sorger, Peter K; Dudley, Joel T; Bavari, Sina; Panchal, Rekha G; Ma'ayan, Avi

    2016-01-01

    The Library of Integrated Network-based Cellular Signatures (LINCS) L1000 data set currently comprises over a million gene expression profiles of chemically perturbed human cell lines. Through several unique intrinsic and extrinsic benchmarking schemes, we demonstrate that processing the L1000 data with the characteristic direction (CD) method significantly improves signal to noise compared with the MODZ method currently used to compute L1000 signatures. The CD-processed L1000 signatures are served through a state-of-the-art web-based search engine application called L1000CDS2. The L1000CDS2 search engine provides prioritization of thousands of small-molecule signatures, and their pairwise combinations, predicted to either mimic or reverse an input gene expression signature using two methods. The L1000CDS2 search engine also predicts drug targets for all the small molecules profiled by the L1000 assay that we processed. Targets are predicted by computing the cosine similarity between the L1000 small-molecule signatures and a large collection of signatures extracted from the Gene Expression Omnibus (GEO) for single-gene perturbations in mammalian cells. We applied L1000CDS2 to prioritize small molecules that are predicted to reverse expression in 670 disease signatures also extracted from GEO, and prioritized small molecules that can mimic expression of 22 endogenous ligand signatures profiled by the L1000 assay. As a case study, to further demonstrate the utility of L1000CDS2, we collected expression signatures from human cells infected with Ebola virus at 30, 60 and 120 min. Querying these signatures with L1000CDS2, we identified kenpaullone, a GSK3B/CDK2 inhibitor that we show, in subsequent experiments, has a dose-dependent efficacy in inhibiting Ebola infection in vitro without causing cellular toxicity in human cell lines. In summary, the L1000CDS2 tool can be applied in many biological and biomedical settings, while improving the extraction of knowledge from the LINCS L1000 resource.
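
    The target-prediction step described above reduces to ranking signatures by cosine similarity. A minimal sketch with made-up gene vectors (illustrative only; not the L1000CDS2 code or data):

```python
import math

# Cosine similarity between two gene-expression signatures represented as
# {gene: differential-expression value}. Only genes present in both
# signatures contribute to the dot product; norms cover each full signature.
def cosine(a, b):
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Rank a signature database against a query: a high cosine suggests a
# "mimic", a strongly negative cosine suggests a "reverse" relationship.
def rank_by_similarity(query, db):
    return sorted(db, key=lambda name: cosine(query, db[name]), reverse=True)
```

    Sorting ascending instead of descending would surface reversers first, which is the relevant direction when searching for candidate therapeutics against a disease signature.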

  17. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
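
    The basic loop such a tool automates, selection, crossover, and mutation over a population, can be sketched as follows. This toy maximizes the number of ones in a bit string (the classic OneMax problem) and is not Splicer's actual interface.

```python
import random

# A minimal generational genetic algorithm: tournament selection,
# one-point crossover, and per-bit mutation on fixed-length bit strings.
def genetic_algorithm(n_bits=20, pop_size=30, generations=60, seed=42):
    rng = random.Random(seed)

    def fitness(bits):
        return sum(bits)  # OneMax: count of ones

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection: survival of the fittest, loosely
            return max(rng.sample(pop, 3), key=fitness)

        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_bits):             # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

    Real problems substitute a domain objective for `fitness` and a problem-specific encoding for the bit string; the loop itself is unchanged.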

  18. Cyclical parthenogenesis algorithm for layout optimization of truss structures with frequency constraints

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Zolghadr, A.

    2017-08-01

    Structural optimization with frequency constraints is seen as a challenging problem because it is associated with highly nonlinear, discontinuous and non-convex search spaces consisting of several local optima. Therefore, competent optimization algorithms are essential for addressing these problems. In this article, a newly developed metaheuristic method called the cyclical parthenogenesis algorithm (CPA) is used for layout optimization of truss structures subjected to frequency constraints. CPA is a nature-inspired, population-based metaheuristic algorithm, which imitates the reproductive and social behaviour of some animal species such as aphids, which alternate between sexual and asexual reproduction. The efficiency of the CPA is validated using four numerical examples.

  19. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous resources. The past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge, for the following reasons. 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to these issues, but integrating them with existing services is challenging due to expandability limitations of the service frameworks and metadata templates. 4) The capabilities that help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results. Furthermore, the presentation of value-added additional information (such as service quality and user feedback), which conveys important decision-supporting information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on a brokering middleware framework to search, integrate and visualize heterogeneous geospatial resources. Specifically: 1) A lightweight discovery broker conducts distributed search, retrieving metadata records for geospatial resources and additional information from dispersed services (portals and catalogues) and other systems on the fly. 2) A quality monitoring and evaluation broker (i.e., QoS Checker) is developed and integrated to provide quality information for geospatial web services. 3) Semantic-assisted search and relevance evaluation functions are implemented by loosely interoperating with an ESIP Testbed component. 4) Sophisticated information and data visualization functionalities and tools are assembled to improve user experience and assist resource selection.

  20. Bare-Bones Teaching-Learning-Based Optimization

    PubMed Central

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844
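
    The "bare-bones" idea, replacing a deterministic update with Gaussian sampling around a point between the best learner (the teacher) and the population mean, can be sketched as follows. This is an illustrative simplification, not the published BBTLBO update rules.

```python
import random

# One bare-bones-style step: each learner samples a candidate from a Gaussian
# centred midway between the teacher and the population mean, with a spread
# proportional to their separation, then keeps the better of old and new
# (the greedy acceptance used in TLBO-family algorithms).
def bare_bones_step(pop, objective, rng):
    teacher = min(pop, key=objective)
    dim = len(pop[0])
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for x in pop:
        cand = [rng.gauss((teacher[d] + mean[d]) / 2,
                          abs(teacher[d] - mean[d]) + 1e-9)
                for d in range(dim)]
        new_pop.append(min(x, cand, key=objective))
    return new_pop

def optimize(objective, dim, iters=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        pop = bare_bones_step(pop, objective, rng)
    return min(pop, key=objective)
```

    The spread shrinks as the population converges on the teacher, so exploration narrows automatically without any tuning parameters.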

  1. Bare-bones teaching-learning-based optimization.

    PubMed

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.

  2. Empirical comparison study of approximate methods for structure selection in binary graphical models.

    PubMed

    Viallon, Vivian; Banerjee, Onureena; Jougla, Eric; Rey, Grégoire; Coste, Joel

    2014-03-01

    Looking for associations among multiple variables is a topical issue in statistics due to the increasing amount of data encountered in biology, medicine, and many other domains involving statistical applications. Graphical models have recently gained popularity for this purpose in the statistical literature. In the binary case, however, exact inference is generally very slow or even intractable because of the form of the so-called log-partition function. In this paper, we review various approximate methods for structure selection in binary graphical models that have recently been proposed in the literature and compare them through an extensive simulation study. We also propose a modification of one existing method, which is shown to achieve good performance and to be generally very fast. We conclude with an application in which we search for associations among causes of death recorded on French death certificates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
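
    The multistart flavour of MLSL, running a local search from many random starts and clustering the distinct local optima found, can be illustrated with a 1-D toy. This is a generic sketch of the multistart idea, not the paper's MLSL+MADS algorithm.

```python
import random

# A simple derivative-free local descent: try steps left and right,
# halving the step size whenever neither direction improves.
def local_descent(f, x, step=0.5, tol=1e-6):
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x, moved = cand, True
        if not moved:
            step /= 2
    return x

# Multistart global phase: descend from many random starts and keep the
# distinct local minima (starts landing within `sep` of a known minimum
# are clustered together rather than recorded again).
def multistart(f, lo, hi, n_starts=50, seed=0, sep=0.5):
    rng = random.Random(seed)
    minima = []
    for _ in range(n_starts):
        x = local_descent(f, rng.uniform(lo, hi))
        if all(abs(x - m) > sep for m in minima):
            minima.append(x)
    return sorted(minima)
```

    On a multimodal objective such as f(x) = (x² − 4)², the procedure recovers both solutions x = ±2, which is exactly the behaviour needed when an inverse problem admits multiple parameter sets consistent with the same measurements.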

  4. Relabeling exchange method (REM) for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  5. X Views and Counting: Interest in Rape-Oriented Pornography as Gendered Microaggression.

    PubMed

    Makin, David A; Morczek, Amber L

    2016-07-01

    Decades ago, academics and activists called attention to the importance of identifying, analyzing, and tracking the transmission of attitudes, behaviors, and norms correlated with violence against women. One specific call to attention identified the media as a mode of transmission. This research builds on prior studies of media, with an emphasis on Internet search queries. Using Google search data for the period 2004 to 2012, this research provides a regional analysis of associated interest in rape-oriented pornography and pornographic hubs. Results indicate minor regional variations in interest, including the use of "BDSM" ("bondage/discipline, dominance/submission, and sadomasochism") as a foundational query for use in trend analysis. Interest in rape-oriented pornography by way of pornographic hubs is discussed in the context of microaggression. © The Author(s) 2015.

  6. MDTS: automatic complex materials design using Monte Carlo tree search.

    PubMed

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-01-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the computer game of Go. Unlike evolutionary algorithms, which require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
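
    A minimal Monte Carlo tree search over a binary structure space conveys the idea. The 8-site "alloy", the toy score function, and all constants are invented for illustration and are not MDTS's actual interface:

    ```python
    import math
    import random

    rng = random.Random(42)
    N_SITES = 8                          # hypothetical alloy with 8 lattice sites
    TARGET = (1, 0, 1, 1, 0, 0, 1, 0)   # hidden optimum of the toy black box

    def score(struct):
        """Toy black-box property: fraction of sites matching TARGET."""
        return sum(a == b for a, b in zip(struct, TARGET)) / N_SITES

    stats = {}   # tree node (site-assignment prefix) -> (visits, value_sum)

    def ucb(parent, child, c=1.4):
        visits, value = stats[child]
        return value / visits + c * math.sqrt(math.log(stats[parent][0]) / visits)

    def mcts(n_iter=3000):
        best, best_score = None, -1.0
        for _ in range(n_iter):
            prefix = ()
            while len(prefix) < N_SITES:          # selection / expansion
                children = [prefix + (b,) for b in (0, 1)]
                unseen = [ch for ch in children if ch not in stats]
                if unseen:
                    prefix = rng.choice(unseen)
                    break
                prefix = max(children, key=lambda ch: ucb(prefix, ch))
            # rollout: complete the structure with random site assignments
            struct = tuple(prefix) + tuple(rng.randint(0, 1)
                                           for _ in range(N_SITES - len(prefix)))
            r = score(struct)
            if r > best_score:
                best, best_score = struct, r
            for i in range(len(prefix) + 1):      # backpropagation
                visits, value = stats.get(prefix[:i], (0, 0.0))
                stats[prefix[:i]] = (visits + 1, value + r)
        return best, best_score

    best, best_val = mcts()
    ```

    Note that nothing here needs tuning beyond the exploration constant's default: the tree grows one node per iteration and the UCB rule balances revisiting good prefixes against exploring untried ones, which is the property the abstract contrasts with evolutionary algorithms.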

  7. MDTS: automatic complex materials design using Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Dieb, Thaer M.; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-12-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the computer game of Go. Unlike evolutionary algorithms, which require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  8. MetaSEEk: a content-based metasearch engine for images

    NASA Astrophysics Data System (ADS)

    Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu

    1997-12-01

    Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
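
    The performance-ranking idea can be sketched as a running per-query-class score table with feedback updates. The engine names and the query class below are hypothetical:

    ```python
    from collections import defaultdict

    class MetaSearchRanker:
        """Sketch of performance-based engine selection: keep a running mean
        rating per (query class, engine) and fold in user feedback."""

        def __init__(self, engines):
            self.scores = defaultdict(lambda: {e: 0.0 for e in engines})
            self.counts = defaultdict(lambda: {e: 0 for e in engines})

        def recommend(self, query_class, k=2):
            """Return the k engines with the best past performance."""
            s = self.scores[query_class]
            return sorted(s, key=s.get, reverse=True)[:k]

        def feedback(self, query_class, engine, rating):
            """Update the engine's running-mean score with a user rating."""
            self.counts[query_class][engine] += 1
            n = self.counts[query_class][engine]
            s = self.scores[query_class]
            s[engine] += (rating - s[engine]) / n

    r = MetaSearchRanker(["engineA", "engineB", "engineC"])
    r.feedback("texture", "engineB", 1.0)
    r.feedback("texture", "engineA", 0.2)
    r.feedback("texture", "engineB", 0.8)
    top = r.recommend("texture", k=1)
    ```

    The baseline version described in the abstract would correspond to skipping the score table and querying all engines uniformly, regardless of past performance.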

  9. Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance

    DTIC Science & Technology

    2003-07-21

    Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance. Vincent A. Cicirello, CMU-RI-TR-03-27. Submitted in partial fulfillment...lead to the development of a search control framework, called QD-BEACON, that uses online-generated statistical models of search performance to

  10. In Other Voices: Expanding the Educational Conversation. Proceedings of the Annual Meeting of the South Atlantic Philosophy of Education Society (36th, Greenville, SC, October 4-5, 1991).

    ERIC Educational Resources Information Center

    Strandberg, Warren, Ed.

    These proceedings include the following papers: "Reason and Romance in Argument and Conversation" (Margaret Buchman); "Conversation as a Romance of Reason: A Response to Margret Buchman" (James W. Garrison); "In Search of a Calling" (Thomas Buford); "Listening for the Call: A Response to Thomas Buford" (Peter Carbone); "Some Reconceptions in…

  11. Discovering Extrasolar Planets with Microlensing Surveys

    NASA Astrophysics Data System (ADS)

    Wambsganss, J.

    2016-06-01

    An astronomical survey is commonly understood as a mapping of a large region of the sky, either photometrically (possibly in various filters/wavelength ranges) or spectroscopically. Often, catalogs of objects are produced/provided as the main product or a by-product. However, with the advent of large CCD cameras and dedicated telescopes with wide-field imaging capabilities, it became possible in the early 1990s to map the same region of the sky over and over again. In principle, such data sets could be combined to get very deep stacked images of the regions of interest. However, I will report on a completely different use of such repeated maps: exploring the time domain for particular kinds of stellar variability, namely microlens-induced magnifications, in search of exoplanets. Such a time-domain microlensing survey was originally proposed by Bohdan Paczynski in 1986 in order to search for dark matter objects in the Galactic halo. Only a few years later, three teams started this endeavour. I will report on the history and current state of gravitational microlensing surveys. By now, routinely 100 million stars in the Galactic Bulge are monitored a few times per week by so-called survey teams. All stars with constant apparent brightness and those following known variability patterns are filtered out in order to detect the roughly 2000 microlensing events per year which are produced by stellar lenses. These microlensing events are identified "online" while still in their early phases and then monitored with much higher cadence by so-called follow-up teams. The most interesting of such events are those produced by a star-plus-planet lens. By now, on the order of 30 exoplanets have been discovered by these combined microlensing surveys. Microlensing searches for extrasolar planets are complementary to other exoplanet search techniques. 
There are two particular advantages: the microlensing method is sensitive down to Earth-mass planets even with ground-based telescopes, and it can easily be used to determine the global abundance of planets in the Milky Way. Recent results of these microlensing surveys are presented and discussed, e.g., the discovery that on average every Milky Way star has at least one planet of Neptune mass or higher.
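
    The single-lens light curves behind these surveys follow from the standard point-source point-lens magnification formula, A(u) = (u² + 2) / (u·√(u² + 4)), with u(t) the lens-source separation in Einstein radii. The event parameters below are arbitrary:

    ```python
    import math

    def magnification(u):
        """Point-source point-lens magnification for separation u
        (in units of the Einstein radius)."""
        return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

    def light_curve(t, t0, tE, u0):
        """Magnification at time t for a source passing the lens with
        impact parameter u0, closest approach at t0, and Einstein-radius
        crossing time tE (all in consistent units, e.g. days)."""
        u = math.sqrt(u0 * u0 + ((t - t0) / tE) ** 2)
        return magnification(u)

    peak = light_curve(0.0, 0.0, 20.0, 0.1)       # near closest approach
    baseline = light_curve(300.0, 0.0, 20.0, 0.1)  # far from the lens
    ```

    A survey pipeline flags a star once its brightness departs from baseline along this characteristic symmetric curve; a planetary companion then appears as a short-lived anomaly superimposed on it.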

  12. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
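
    A compact genetic algorithm over candidate well subsets illustrates the search (without the POD reduction step). The sensitivity matrix is random stand-in data, and the criterion is the sum-of-squared-sensitivities idea in toy form:

    ```python
    import random

    rng = random.Random(1)
    N_LOC, K, N_PAR = 20, 4, 3
    # Stand-in sensitivities of the observed head at each candidate well
    # location to each unknown pumping parameter.
    SENS = [[rng.gauss(0.0, 1.0) for _ in range(N_PAR)] for _ in range(N_LOC)]

    def criterion(design):
        """Maximal information criterion: sum of squared sensitivities."""
        return sum(SENS[i][j] ** 2 for i in design for j in range(N_PAR))

    def ga(pop_size=30, n_gen=40, elite=10):
        pop = [rng.sample(range(N_LOC), K) for _ in range(pop_size)]
        for _ in range(n_gen):
            pop.sort(key=criterion, reverse=True)
            next_pop = pop[:elite]                    # elitism
            while len(next_pop) < pop_size:
                a, b = rng.sample(pop[:15], 2)        # parents from the fittest
                child = list(dict.fromkeys(a[:2] + b))[:K]   # crossover
                if rng.random() < 0.3:                # mutation: swap one well
                    child[rng.randrange(K)] = rng.randrange(N_LOC)
                    child = list(dict.fromkeys(child))
                while len(child) < K:                 # repair duplicate wells
                    c = rng.randrange(N_LOC)
                    if c not in child:
                        child.append(c)
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=criterion)

    best_design = ga()
    ```

    In the paper's setting each criterion evaluation would require a groundwater model run, which is exactly why the POD-reduced model matters: the GA above evaluates the criterion hundreds of times per generation.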

  13. Identifying e-cigarette vape stores: description of an online search methodology.

    PubMed

    Kim, Annice E; Loomis, Brett; Rhodes, Bryan; Eggers, Matthew E; Liedtke, Christopher; Porter, Lauren

    2016-04-01

    Although the overall impact of Electronic Nicotine Delivery Systems (ENDS) on public health is unclear, awareness, use, and marketing of the products have increased markedly in recent years. Identifying the increasing number of 'vape stores' that specialise in selling ENDS can be challenging given the lack of regulatory policies and licensing. This study assesses the utility of online search methods in identifying ENDS vape stores. We conducted online searches in Google Maps, Yelp, and YellowPages to identify listings of ENDS vape stores in Florida, and used a crowdsourcing platform to call and verify stores that primarily sold ENDS to consumers. We compared store listings generated from the online search and crowdsourcing methodology to the list of licensed tobacco and ENDS retailers from the Florida Department of Business and Professional Regulation. The combined results from all three online sources yielded a total of 403 ENDS vape stores. Nearly 32.5% of these stores were on the state tobacco licensure list, while 67.5% were not. Accuracy of online results was highest for Yelp (77.6%), followed by YellowPages (77.1%) and Google (53.0%). Using the online search methodology, we identified more ENDS vape stores than were on the state tobacco licensure list. This approach may be a promising strategy to identify and track the growth of ENDS vape stores over time, especially in states without a systematic licensing requirement for such stores. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  14. DFT Performance Prediction in FFTW

    NASA Astrophysics Data System (ADS)

    Gu, Liang; Li, Xiaoming

    The Fastest Fourier Transform in the West (FFTW) is an adaptive FFT library that generates highly efficient Discrete Fourier Transform (DFT) implementations. It is one of the fastest FFT libraries available, and it outperforms many adaptive or hand-tuned DFT libraries. Its success largely relies on the huge search space spanned by several FFT algorithms and a set of compiler-generated C code fragments (called codelets) for small-size DFTs. FFTW empirically finds the best algorithm by measuring the performance of different algorithm combinations. Although the empirical search works very well for FFTW, the search process does not explain why the best plan found performs best, and the search overhead grows polynomially as the DFT size increases. The opposite of empirical search is model-driven optimization. However, it is widely believed that model-driven optimization is inferior to empirical search and is particularly powerless to solve problems as complex as the optimization of DFT.
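
    Empirical plan search can be mimicked by timing competing DFT implementations on a concrete input and keeping the fastest, as sketched below with a naive O(n²) DFT and a recursive radix-2 FFT (sizes and repetition counts are arbitrary, and real FFTW plans are far richer than this two-way choice):

    ```python
    import cmath
    import time

    def naive_dft(x):
        """Direct O(n^2) evaluation of the DFT definition."""
        n = len(x)
        return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
                for j in range(n)]

    def radix2_fft(x):
        """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = radix2_fft(x[0::2]), radix2_fft(x[1::2])
        tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return ([even[k] + tw[k] for k in range(n // 2)] +
                [even[k] - tw[k] for k in range(n // 2)])

    def best_plan(size, plans, reps=3):
        """FFTW-style empirical search: time each candidate, keep the fastest."""
        data = [complex(i % 7, 0.0) for i in range(size)]
        timings = {}
        for name, fn in plans.items():
            t0 = time.perf_counter()
            for _ in range(reps):
                fn(data)
            timings[name] = time.perf_counter() - t0
        return min(timings, key=timings.get), timings

    winner, timings = best_plan(256, {"naive": naive_dft, "radix2": radix2_fft})
    ```

    The measurement tells us which plan is fastest, but, as the abstract notes, not why, and the cost of timing every candidate grows with the transform size.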

  15. Investigating the enhanced Best Performance Algorithm for Annual Crop Planning problem based on economic factors

    PubMed Central

    2017-01-01

    The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on this problem by presenting a new mathematical formulation, which is based on market economic factors. To determine solutions, a new local search metaheuristic called the enhanced Best Performance Algorithm (eBPA) is investigated. eBPA’s results are compared against two well-known local search metaheuristic algorithms: Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems. PMID:28792495

  16. Scaffold hopping in drug discovery using inductive logic programming.

    PubMed

    Tsunoyama, Kazuhisa; Amini, Ata; Sternberg, Michael J E; Muggleton, Stephen H

    2008-05-01

    In chemoinformatics, searching for compounds which are structurally diverse and share a biological activity is called scaffold hopping. Scaffold hopping is important since it can be used to obtain alternative structures when the compound under development has unexpected side-effects. Pharmaceutical companies use scaffold hopping when they wish to circumvent prior patents for targets of interest. We propose a new method for scaffold hopping using inductive logic programming (ILP). ILP uses the observed spatial relationships between pharmacophore types in pretested active and inactive compounds and learns human-readable rules describing the diverse structures of active compounds. The ILP-based scaffold hopping method is compared to two previous algorithms (chemically advanced template search, CATS, and CATS3D) on 10 data sets with diverse scaffolds. The comparison shows that the ILP-based method is significantly better than random selection while the other two algorithms are not. In addition, the ILP-based method retrieves new active scaffolds which were not found by CATS and CATS3D. The results show that the ILP-based method is at least as good as the other methods in this study. ILP produces human-readable rules, which makes it possible to identify the three-dimensional features that lead to scaffold hopping. A minor variant of a rule learnt by ILP for scaffold hopping was subsequently found to cover an inhibitor identified by an independent study. This provides a successful result in a blind trial of the effectiveness of ILP to generate rules for scaffold hopping. We conclude that ILP provides a valuable new approach for scaffold hopping.

  17. Cervical Cap

    MedlinePlus

    What Is a Cervical Cap? A cervical cap is a small cup made ...

  18. Searching for gravitational waves from pulsars

    NASA Astrophysics Data System (ADS)

    Gill, Colin D.

    The work presented here looks at several aspects of searching for continuous gravitational waves from pulsars, often referred to simply as continuous waves or CWs. This begins with an examination of noise in the current generation of laser interferometer gravitational wave detectors in the region below ~100 Hz. This frequency region is of particular interest with regard to CW detection, as two prime sources for a first CW detection, the Crab and Vela pulsars, are expected to emit CWs in this frequency range. The Crab pulsar's frequency lies very close to a strong noise line due to the 60 Hz mains electricity in the LIGO detectors. The types of noise generally present in this region are discussed. Also presented are investigations into the noise features present in the LIGO S6 data and the Virgo VSR2 data using a program called Fscan. A particular noise feature present during VSR2 was discovered with the use of Fscan; I report on it and show how it degrades the sensitivity of searches for CWs from the Vela pulsar using this data. I next present search results for CWs from the Vela pulsar using VSR2 and VSR4 data. Whilst these searches did not find any evidence for gravitational waves being present in the data, they were able to place upper limits on the strength of gravitational wave emission from Vela lower than the upper limit set by the pulsar's spin-down, making it only the second pulsar for which this milestone has been achieved. The lowest upper limit derived from these searches confines the spin-down energy lost from Vela due to gravitational waves to just 9% of Vela's total spin-down energy. The data from VSR2 and VSR4 are also examined; analysis of hardware injections in these datasets verifies the calibration of the data and the search method. 
Similar results are also presented for a search for CWs from the Crab pulsar, where data from VSR2, VSR3, VSR4, S5 and S6 are combined to produce an upper limit on the gravitational wave (GW) amplitude lower than has been previously possible, representing 0.5% of the energy lost by the pulsar as seen through its spin-down. The same search method is also applied to analyse data for another 110 known pulsars, with five of these being gamma-ray pulsars that have been timed by the Fermi satellite.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
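
    The constant-physical-size region layout that distinguishes Stride Search from a grid-point search can be sketched directly: fixed-width latitude strips, with the longitude stride widening toward the poles so every sector covers roughly the same physical extent. The sector size below is an arbitrary choice:

    ```python
    import math

    EARTH_RADIUS_KM = 6371.0

    def stride_search_sectors(sector_km=3000.0):
        """Lay out search-sector centers: latitude strips of fixed physical
        width; within each strip, the number of longitude sectors shrinks
        toward the poles as the circle of latitude shortens."""
        dlat = math.degrees(sector_km / EARTH_RADIUS_KM)
        sectors = []
        lat = -90.0 + dlat / 2
        while lat < 90.0:
            circumference = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat))
            n_lon = max(1, math.ceil(circumference / sector_km))
            dlon = 360.0 / n_lon
            for k in range(n_lon):
                sectors.append((lat, -180.0 + (k + 0.5) * dlon))
            lat += dlat
        return sectors

    sectors = stride_search_sectors()
    ```

    A strip near the equator needs many sectors while a polar strip needs only a couple, so the search cost is tied to physical area rather than to grid resolution, which is why the speed-up over a grid-point search grows with resolution.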

  20. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.

  1. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
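
    The fingerprint-and-hash pipeline can be imitated in miniature. The "fingerprint" here is just the sign of successive sample differences rather than FAST's actual wavelet-based features, and the signals, window size, and band count are invented:

    ```python
    import random

    def fingerprint(window):
        """Toy binary fingerprint: sign of successive sample differences."""
        return tuple(1 if b > a else 0 for a, b in zip(window, window[1:]))

    def candidate_pairs(signals, win=9, bands=4):
        """LSH-style banding: windows whose fingerprints share any band
        bucket become candidate matches across different signals."""
        fps = []
        for sid, sig in enumerate(signals):
            for start in range(0, len(sig) - win + 1, win):
                fps.append((sid, start, fingerprint(sig[start:start + win])))
        rows = (win - 1) // bands
        pairs = set()
        for b in range(bands):
            buckets = {}
            for sid, start, fp in fps:
                buckets.setdefault(fp[b * rows:(b + 1) * rows], []).append((sid, start))
            for group in buckets.values():
                for i in range(len(group)):
                    for j in range(i + 1, len(group)):
                        if group[i][0] != group[j][0]:   # cross-signal only
                            pairs.add((group[i], group[j]))
        return pairs

    rng = random.Random(0)
    event = [0, 3, 5, 2, -4, -6, -1, 2, 4]   # toy repeating waveform
    sig_a = [rng.gauss(0, 1) for _ in range(18)] + event + [rng.gauss(0, 1) for _ in range(27)]
    sig_b = [rng.gauss(0, 0.1) for _ in range(18)] + event + [rng.gauss(0, 0.1) for _ in range(27)]
    pairs = candidate_pairs([sig_a, sig_b])
    ```

    The scaling benefit comes from the hashing: similar fingerprints land in the same bucket, so similar windows are found without comparing every window against every other, which is what makes the approach so much faster than autocorrelation.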

  2. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  3. Design of multiplier-less sharp transition width non-uniform filter banks using gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Bindiya T., S.; Elias, Elizabeth

    2015-01-01

    In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition width are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the order of the filters will become very high, leading to increased complexity. The frequency response masking (FRM) method is known to result in linear-phase sharp transition width filters with low complexity. It is found that the proposed design method, which is based on FRM, gives better results compared to the earlier reported results, in terms of the number of multipliers when sharp transition width filter banks are needed. To further reduce the complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite precision coefficients in the signed power of two space. This may lead to performance degradation and calls for the use of a suitable optimisation technique. In this paper, the gravitational search algorithm is proposed for use in the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear phase and multiplier-less and have sharp transition width.
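
    The finite-precision signed-power-of-two space can be illustrated with a greedy quantiser that approximates a coefficient by at most two signed powers of two, so each multiplication becomes shifts and adds. The rounding rule and term budget below are illustrative choices, not the paper's optimisation:

    ```python
    import math

    def to_signed_powers_of_two(x, n_terms=2, min_exp=-8):
        """Greedily quantise x into a sum of signed powers of two."""
        terms, residual = [], x
        for _ in range(n_terms):
            if residual == 0:
                break
            exp = max(min_exp, round(math.log2(abs(residual))))
            term = math.copysign(2.0 ** exp, residual)
            if abs(residual - term) > abs(residual - term / 2) and exp - 1 >= min_exp:
                term /= 2            # the next smaller power fits better
            terms.append(term)
            residual -= term
        return terms, x - residual   # the terms and the value they represent

    coefficient = 0.40625
    terms, approx = to_signed_powers_of_two(coefficient)
    ```

    A filter built from such coefficients needs no multipliers (adding `x >> 1` replaces multiplying by 0.5), at the cost of a quantisation error that the optimisation stage, the gravitational search algorithm in this paper, is there to keep acceptable.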

  4. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information

    PubMed Central

    2013-01-01

    Background Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. Results We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user’s query, advanced data searching based on the specified user’s query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
Conclusions search GenBank extends standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in the search GenBank is a unique feature and has a great potential. The potential will further grow in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/. PMID:23452691

  5. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature with great potential, which will grow further as the network of relationships between data stored in particular databases becomes denser. search GenBank is available for public use at http://sgb.biotools.pl/.
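The eUtils "choreography" described above can be pictured with a short sketch. This is not search GenBank's own code; it only composes the documented eUtils request URLs (esearch, elink, efetch) for a hypothetical macro, without issuing any network calls, and the query terms and IDs are illustrative.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def eutils_url(tool, **params):
    """Compose the URL for one Entrez eUtils call (esearch, elink, efetch, ...)."""
    return f"{EUTILS}/{tool}.fcgi?{urlencode(params)}"

def macro(steps):
    """A 'macro' in the search GenBank sense: an ordered choreography of
    eUtils calls; here we only compose the URLs instead of issuing them."""
    return [eutils_url(tool, **params) for tool, params in steps]

# Hypothetical exploration path: find a nucleotide record, follow its
# links into PubMed, then fetch the linked abstracts.
urls = macro([
    ("esearch", {"db": "nucleotide", "term": "BRCA1[gene] AND human[orgn]"}),
    ("elink",   {"dbfrom": "nucleotide", "db": "pubmed", "id": "1234"}),
    ("efetch",  {"db": "pubmed", "id": "5678", "rettype": "abstract"}),
])
```

A real client would issue each URL in turn and feed the IDs returned by one step into the next, which is exactly the automation that saved macros provide.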

  6. Calculating p-values and their significances with the Energy Test for large datasets

    NASA Astrophysics Data System (ADS)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
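A minimal sketch of the energy test's ingredients (not the authors' implementation): the T-value for two 1-D samples using the Gaussian distance-weighting function common in the particle-physics literature, plus a brute-force permutation estimate of the p-value, which is exactly the expensive step that the paper's scaling method is designed to avoid for large samples.

```python
import math, random

def energy_T(a, b, sigma=1.0):
    """Energy-test statistic T for two 1-D samples; psi(d) = exp(-d^2 / 2 sigma^2)
    is an assumed (though common) choice of weighting function."""
    psi = lambda x, y: math.exp(-((x - y) ** 2) / (2 * sigma ** 2))
    n, m = len(a), len(b)
    same_a = sum(psi(a[i], a[j]) for i in range(n) for j in range(i + 1, n))
    same_b = sum(psi(b[i], b[j]) for i in range(m) for j in range(i + 1, m))
    cross  = sum(psi(x, y) for x in a for y in b)
    return same_a / (n * (n - 1)) + same_b / (m * (m - 1)) - cross / (n * m)

def p_value(a, b, n_perm=200, seed=0):
    """Permutation p-value: build the null distribution of T by repeatedly
    re-splitting the pooled sample and count how often T >= T_observed."""
    rng = random.Random(seed)
    t_obs = energy_T(a, b)
    pooled = list(a) + list(b)
    worse = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if energy_T(pooled[:len(a)], pooled[len(a):]) >= t_obs:
            worse += 1
    return worse / n_perm
```

Samples drawn from well-separated populations yield a larger T than identically distributed ones, which is what the permutation test exploits.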

  7. The Effect of Animated Banner Advertisements on a Visual Search Task

    DTIC Science & Technology

    2001-01-01

    experimental result calls into question previous advertising tips suggested by WebWeek, cited in [17]. In 1996, the online magazine recommended that site...prone in the presence of animated banners. Keywords Animation, visual search, banner advertisements , flashing INTRODUCTION As processor and Internet...is the best way to represent the selection tool in a toolbar, where each icon must fit in a small area? Photoshop and other popular painting programs

  8. Evaluation of Environmental Information Products for Search and Rescue Optimal Planning System (SAROPS) - Version for Public Release

    DTIC Science & Technology

    2008-02-01

    is called EFS-POM. EFS-POM is forced by surface atmospheric forcing (wind, heating / cooling , sea level pressure) and by boundary forcing derived from...Peter Olsson, University of Alaska Anchorage. Heating and cooling is given by the climatological monthly heat flux from COADS (Comprehensive Ocean...Environmental Information Products for Search and Rescue Optimal Planning System (SAROPS) - Version for Public Release FINAL REPORT February

  9. Manpower Issues Involving Visit, Board, Search, and Seizure (VBSS)

    DTIC Science & Technology

    2012-03-01

    article by Lolita C. Baldor, American forces flying off the guided-missile destroyer USS Kidd responded to a distress call from the Iranian vessel, the Al...no other 2 Lolita C. Baldor, Associated Press, “USS Kidd rescues Iran boat from pirates,” January 6...Public Affairs, “VBSS: Evolving the Mission,” Story number: NNS090425-03, http://www.navy.mil/search/display.asp?story_id=44692. Baldor, Lolita C

  10. Broken Bones (For Parents)

    MedlinePlus

    ... Safe Videos for Educators Search English Español Broken Bones KidsHealth / For Parents / Broken Bones Print en español Huesos rotos What Is a Broken Bone? A broken bone, also called a fracture, is ...

  11. Autism

    MedlinePlus

    ... Staying Safe Videos for Educators Search English Español Autism KidsHealth / For Teens / Autism What's in this article? ... With Autism? Print en español Autismo What Is Autism? Autism (also called "autism spectrum disorder") is a ...

  12. Sepsis (For Parents)

    MedlinePlus

    ... Staying Safe Videos for Educators Search English Español Sepsis KidsHealth / For Parents / Sepsis What's in this article? ... When to Call the Doctor Print What Is Sepsis? Sepsis is when the immune system responds to ...

  13. Parkinson's Disease

    MedlinePlus

    ... Staying Safe Videos for Educators Search English Español Parkinson's Disease KidsHealth / For Kids / Parkinson's Disease What's in this ... symptoms of something called Parkinson's disease. What Is Parkinson's Disease? Parkinson's disease is a disorder of the central ...

  14. Defense.gov Special Report: Travels with Mullen - December 2010

    Science.gov Websites

    Department of Defense Submit Search December 2010 Top Stories Chairman Seeks to Restore Military Relationship two countries' stalled military-to-military relationship. Story Mullen Calls Cooperation Best Response

  15. First Aid: Influenza (Flu)

    MedlinePlus

    ... for Educators Search English Español First Aid: The Flu KidsHealth / For Parents / First Aid: The Flu Print ... tiredness What to Do If Your Child Has Flu Symptoms: Call your doctor. Encourage rest. Keep your ...

  16. Cradle Cap (For Parents)

    MedlinePlus

    ... Safe Videos for Educators Search English Español Cradle Cap (Infantile Seborrheic Dermatitis) KidsHealth / For Parents / Cradle Cap ( ... many babies develop called cradle cap. About Cradle Cap Cradle cap is the common term for seborrheic ...

  17. Eek! It's Eczema!

    MedlinePlus

    ... Staying Safe Videos for Educators Search English Español Eczema KidsHealth / For Kids / Eczema What's in this article? ... need to worry. It's just eczema. What Is Eczema? Eczema (say: EK-zeh-ma) is also called ...

  18. Towards PubMed 2.0.

    PubMed

    Fiorini, Nicolas; Lipman, David J; Lu, Zhiyong

    2017-10-30

    Staff from the National Center for Biotechnology Information in the US describe recent improvements to the PubMed search engine and outline plans for the future, including a new experimental site called PubMed Labs.

  19. Knowledge Representation for Decision Making Agents

    DTIC Science & Technology

    2013-07-15

    knowledge map. This knowledge map is a dictionary data structure called tmap in the code. It represents a network of locations with a number [0,1...fillRandom(): Informed initial tmap distribution (randomly generated per node) with belief one. • initialBelief = 3 uses fillCenter(): normal...triggered on AllMyFMsHaveBeenInitialized. 2. Executes main.py • Initializes knowledge map labeled tmap . • Calls initialize search() – resets distanceTot and

  20. Raising the IQ in full-text searching via intelligent querying

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kero, R.; Russell, L.; Swietlik, C.

    1994-11-01

    Current Information Retrieval (IR) technologies allow for efficient access to relevant information, provided that user-selected query terms coincide with the specific linguistic choices made by the authors whose works constitute the text-base. Therefore, the challenge is to enhance the limited searching capability of state-of-the-practice IR. This can be done either with augmented clients that overcome current server searching deficiencies, or with added capabilities that can augment searching algorithms on the servers. The technology being investigated is that of deductive databases, with a set of new techniques called cooperative answering. This technology utilizes semantic networks to allow for navigation between possible query search term alternatives. The augmented search terms are passed to an IR engine and the results can be compared. The project utilizes the OSTI Environment, Safety and Health Thesaurus to populate the domain specific semantic network and the text base of ES&H related documents from the Facility Profile Information Management System as the domain specific search space.
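The cooperative-answering idea of navigating a semantic network of term alternatives can be sketched with a toy thesaurus. The entries below are hypothetical stand-ins for the OSTI ES&H Thesaurus, not its actual content.

```python
# Hypothetical mini-thesaurus: each term maps to related terms
# (synonyms / narrower terms) in the semantic network.
THESAURUS = {
    "contamination": {"pollution", "radioactive waste"},
    "pollution": {"contamination", "effluents"},
}

def expand_query(terms, thesaurus, depth=1):
    """Walk the semantic network up to `depth` hops from each user term,
    collecting alternative search terms to pass on to the IR engine."""
    frontier, seen = set(terms), set(terms)
    for _ in range(depth):
        nxt = set()
        for t in frontier:
            nxt |= thesaurus.get(t, set()) - seen
        seen |= nxt
        frontier = nxt
    return seen
```

The expanded term set, rather than the user's literal query, is what gets submitted to the full-text engine, compensating for vocabulary mismatch between searcher and author.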

  1. Is interstellar archeology possible?

    NASA Astrophysics Data System (ADS)

    Carrigan, Richard A.

    2012-09-01

    Searching for signatures of cosmic-scale archeological artifacts such as Dyson spheres is an interesting alternative to conventional radio SETI. Uncovering such an artifact does not require the intentional transmission of a signal on the part of the original civilization. This type of search is called interstellar archeology or sometimes cosmic archeology. A variety of interstellar archeology signatures is discussed, including non-natural planetary atmospheric constituents, stellar doping, and Dyson spheres, as well as signatures of stellar- and galactic-scale engineering. The concept of a Fermi bubble due to interstellar migration is reviewed in the discussion of galactic signatures. These potential interstellar archeological signatures are classified using the Kardashev scale. A modified Drake equation is introduced. With few exceptions, interstellar archeological signatures are clouded and beyond current technological capabilities. However, SETI for so-called cultural transmissions and planetary atmosphere signatures are within reach.

  2. Adaptive striping watershed segmentation method for processing microscopic images of overlapping irregular-shaped and multicentre particles.

    PubMed

    Xiao, X; Bai, B; Xu, N; Wu, K

    2015-04-01

    Oversegmentation is a major drawback of the morphological watershed algorithm. Here, we study and reveal that the oversegmentation is caused not only by the irregular shapes of the particle images, which is well known, but also by particles, such as ellipses, that have more than one centre. A new parameter, the striping level, is introduced, and a criterion for the striping parameter is built to help find the right markers prior to segmentation. An adaptive striping watershed algorithm is established by applying a procedure, called the marker searching algorithm, to find the markers, which can effectively suppress the oversegmentation. The effectiveness of the proposed method is validated by analysing some typical particle images, including images of gold nanorod ensembles. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  3. A new way of searching for transients: the ADWO method and its results

    NASA Astrophysics Data System (ADS)

    Bagoly, Z.; Szecsi, D.; Ripa, J.; Racz, I. I.; Csabai, I.; Dobos, L.; Horvath, I.; Balazs, L. G.; Toth, L. V.

    2017-12-01

    With the detection of gravitational wave emissions from merging compact objects, it is now more important than ever to effectively mine the data-sets of gamma-ray satellites for non-triggered, short-duration transients. Hence we developed a new method called the Automatized Detector Weight Optimization (ADWO), applicable to space-borne detectors such as Fermi's GBM and RHESSI's Ge detectors. Provided that the trigger time of an astrophysical event is well known (as in the case of a gravitational wave detection) but the detector response matrix is uncertain, ADWO combines the data of all detectors and energy channels to provide the best signal-to-noise ratio. We used ADWO to search for potential electromagnetic counterparts of gravitational wave events, as well as to detect previously un-triggered short-duration GRBs in the data-sets.

  4. Simulation to Support Local Search in Trajectory Optimization Planning

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Venable, K. Brent; Lindsey, James

    2012-01-01

    NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is by exploiting powerful search techniques coming from artificial intelligence coupled with simulation and field tests to design low-noise flight profiles which can be tested in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints directly into the problem formulation that addresses passenger safety and comfort.
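The local-search framing described above can be sketched generically. This is not the authors' system: the "trajectory", the pseudo noise cost, and the comfort constraint below are toy stand-ins for the predictive physical models and safety/comfort constraints the paper couples to the search.

```python
import random

def local_search(init, neighbors, cost, feasible, iters=500, seed=0):
    """Generic local search: repeatedly move to a lower-cost feasible
    neighbor. In the paper's setting, `cost` would come from a noise
    simulation and `feasible` from safety/comfort constraints."""
    rng = random.Random(seed)
    best = init
    for _ in range(iters):
        cand = neighbors(best, rng)
        if feasible(cand) and cost(cand) < cost(best):
            best = cand
    return best

# Toy stand-in: a 'trajectory' is a list of altitudes; noise is assumed
# lower at higher altitude, and a comfort constraint caps the climb rate.
def neighbors(traj, rng):
    i = rng.randrange(len(traj))
    t = list(traj)
    t[i] += rng.choice([-50, 50])
    return t

cost = lambda t: sum(1000.0 / max(h, 1) for h in t)       # pseudo noise metric
feasible = lambda t: (all(abs(a - b) <= 200 for a, b in zip(t, t[1:]))
                      and all(0 < h <= 2000 for h in t))

best = local_search([500, 500, 500], neighbors, cost, feasible)
```

Because only improving, feasible moves are accepted, the returned profile is never worse than the starting one and always satisfies the constraints, which is the property that makes constraint-aware local search attractive here.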

  5. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, offers a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithm for a single target element and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin, called the GRK algorithm, are also discussed.
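Grover's iteration (oracle phase flip plus inversion about the mean) is simple enough to simulate classically at the amplitude level. The sketch below is a textbook illustration, not code from the review; it shows the quadratic speedup via the ~(π/4)√(N/M) iteration count.

```python
import math

def grover_amplitudes(n_items, targets, iterations):
    """Simulate Grover's search on the amplitude level: oracle phase flip
    on the target indices followed by inversion about the mean."""
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        for t in targets:
            amp[t] = -amp[t]                      # oracle: flip target phase
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]         # diffusion: invert about mean
    return amp

# Optimal iteration count is ~ (pi/4) * sqrt(N/M) for M targets among N items.
N, targets = 16, {3}
k = round(math.pi / 4 * math.sqrt(N / len(targets)))
amp = grover_amplitudes(N, targets, k)
prob_target = amp[3] ** 2
```

For N = 16 and one target, k = 3 iterations concentrate over 95% of the probability on the target index, versus the N/2 expected probes of a classical linear scan.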

  6. Evidence-based practice: extending the search to find material for the systematic review

    PubMed Central

    Helmer, Diane; Savoie, Isabelle; Green, Carolyn; Kazanjian, Arminée

    2001-01-01

    Background: Cochrane-style systematic reviews increasingly require the participation of librarians. Guidelines on the appropriate search strategy to use for systematic reviews have been proposed. However, research evidence supporting these recommendations is limited. Objective: This study investigates the effectiveness of various systematic search methods used to uncover randomized controlled trials (RCTs) for systematic reviews. Effectiveness is defined as the proportion of relevant material uncovered for the systematic review using extended systematic review search methods. The following extended systematic search methods are evaluated: searching subject-specific or specialized databases (including trial registries), hand searching, scanning reference lists, and communicating personally. Methods: Two systematic review projects were prospectively monitored regarding the method used to identify items as well as the type of items retrieved. The proportion of RCTs identified by each systematic search method was calculated. Results: The extended systematic search methods uncovered 29.2% of all items retrieved for the systematic reviews. The search of specialized databases was the most effective method, followed by scanning of reference lists, communicating personally, and hand searching. Although the number of items identified through hand searching was small, these unique items would otherwise have been missed. Conclusions: Extended systematic search methods are effective tools for uncovering material for the systematic review. The quality of the items uncovered has yet to be assessed and will be key in evaluating the value of the systematic search methods. PMID:11837256

  7. The influence of bat echolocation call duration and timing on auditory encoding of predator distance in noctuoid moths.

    PubMed

    Gordon, Shira D; Ter Hofstede, Hannah M

    2018-03-22

    Animals co-occur with multiple predators, making sensory systems that can encode information about diverse predators advantageous. Moths in the families Noctuidae and Erebidae have ears with two auditory receptor cells (A1 and A2) used to detect the echolocation calls of predatory bats. Bat communities contain species that vary in echolocation call duration, and the dynamic range of A1 is limited by the duration of sound, suggesting that A1 provides less information about bats with shorter echolocation calls. To test this hypothesis, we obtained intensity-response functions for both receptor cells across many moth species for sound pulse durations representing the range of echolocation call durations produced by bat species in northeastern North America. We found that the threshold and dynamic range of both cells varied with sound pulse duration. The number of A1 action potentials per sound pulse increases linearly with increasing amplitude for long-duration pulses, saturating near the A2 threshold. For short sound pulses, however, A1 saturates with only a few action potentials per pulse at amplitudes far lower than the A2 threshold for both single sound pulses and pulse sequences typical of searching or approaching bats. Neural adaptation was only evident in response to approaching bat sequences at high amplitudes, not search-phase sequences. These results show that, for short echolocation calls, a large range of sound levels cannot be coded by moth auditory receptor activity, resulting in no information about the distance of a bat, although differences in activity between ears might provide information about direction. © 2018. Published by The Company of Biologists Ltd.

  8. A novel approach for dimension reduction of microarray.

    PubMed

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

    This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and Artificial Bee Colony (ABC), called ICA+ABC, to select informative genes based on a Naïve Bayes (NB) algorithm. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of the extraction approach (to reduce the size of the data) and the wrapper approach (to optimize the reduced feature vectors). This hybrid search technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selector with the NB classifier, the combination of ICA with popular filter techniques was also compared against other similar bio-inspired algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
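The wrapper half of such a hybrid can be sketched in a few lines. This is a heavily simplified stand-in, not the authors' ICA+ABC: the bee-colony dynamics are reduced to repeated single-feature mutations kept only when a scoring function improves, and `toy_score` replaces the NB cross-validation accuracy a real run would use.

```python
import random

def wrapper_select(n_features, score, n_bees=10, iters=30, k=3, seed=0):
    """Bare-bones wrapper-style search in the spirit of ABC: candidate
    feature subsets are mutated (one feature flipped at a time) and
    kept when the classifier score improves."""
    rng = random.Random(seed)
    best = set(rng.sample(range(n_features), k))
    best_s = score(best)
    for _ in range(iters):
        for _ in range(n_bees):
            cand = set(best)
            cand.symmetric_difference_update({rng.randrange(n_features)})
            if cand and (s := score(cand)) > best_s:
                best, best_s = cand, s
    return best

# Toy stand-in score: rewards picking features 0 and 2, mildly penalises
# extras; a real run would plug in NB cross-validation accuracy here.
toy_score = lambda s: len(s & {0, 2}) - 0.1 * len(s - {0, 2})
selected = wrapper_select(6, toy_score)
```

In the paper's pipeline the search space would be the ICA-reduced feature vector rather than the raw genes, which is what keeps the wrapper step tractable.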

  9. A Systematic Search and Review of Adult-Targeted Overweight and Obesity Prevention Mass Media Campaigns and Their Evaluation: 2000-2017.

    PubMed

    Kite, James; Grunseit, Anne; Bohn-Goldbaum, Erika; Bellew, Bill; Carroll, Tom; Bauman, Adrian

    2018-01-01

    Mass media campaigns are a commonly used strategy in public health. However, no review has assessed whether the design and evaluation of overweight and obesity campaigns meets best practice recommendations. This study aimed to fill this gap. We systematically searched five databases for peer-reviewed articles describing adult-targeted obesity mass media campaigns published between 2000 and 2017, complemented by reference list searches and contact with authors and agencies responsible for the campaigns. We extracted data on campaign design, implementation, and evaluation from eligible publications and conducted a qualitative review of 29 publications reporting on 14 campaigns. We found a need for formative research with target audiences to ensure campaigns focus on the most salient issues. Further, we noted that most campaigns targeted individual behaviors, despite calls for campaigns to also focus upstream and to address social determinants of obesity. Television was the dominant communication channel but, with the rapid advance of digital media, evaluation of other channels, such as social media, is increasingly important. Finally, although evaluation methods varied in quality, the evidence suggests that campaigns can have an impact on intermediate outcomes, such as knowledge and attitudes. However, evidence is still limited as to whether campaigns can influence behavior change.

  10. Application of Platelet-Rich Plasma to Disorders of the Knee Joint

    PubMed Central

    Mandelbaum, Bert R.; McIlwraith, C. Wayne

    2013-01-01

    Importance. The promising therapeutic potential and regenerative properties of platelet-rich plasma (PRP) have rapidly led to its widespread clinical use in musculoskeletal injury and disease. Although the basic scientific rationale surrounding PRP products is compelling, the clinical application has outpaced the research. Objective. The purpose of this article is to examine the current concepts around the basic science of PRP application, different preparation systems, and clinical application of PRP in disorders in the knee. Evidence Acquisition. A systematic search of PubMed for studies that evaluated the basic science, preparation and clinical application of platelet concentrates was performed. The search used terms, including platelet-rich plasma or PRP preparation, activation, use in the knee, cartilage, ligament, and meniscus. Studies found in the initial search and related studies were reviewed. Results. A comprehensive review of the literature supports the potential use of PRP both nonoperatively and intraoperatively, but highlights the absence of large clinical studies and the lack of standardization between method, product, and clinical efficacy. Conclusions and Relevance. In addition to the call for more randomized, controlled clinical studies to assess the clinical effect of PRP, at this point, it is necessary to investigate PRP product composition and eventually have the ability to tailor the therapeutic product for specific indications. PMID:26069674

  11. Multidimensional indexing structure for use with linear optimization queries

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Castelli, Vittorio (Inventor); Chang, Yuan-Chi (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor)

    2002-01-01

    Linear optimization queries, which usually arise in various decision support and resource planning applications, are queries that retrieve top N data records (where N is an integer greater than zero) which satisfy a specific optimization criterion. The optimization criterion is to either maximize or minimize a linear equation. The coefficients of the linear equation are given at query time. Methods and apparatus are disclosed for constructing, maintaining and utilizing a multidimensional indexing structure of database records to improve the execution speed of linear optimization queries. Database records with numerical attributes are organized into a number of layers and each layer represents a geometric structure called convex hull. Such linear optimization queries are processed by searching from the outer-most layer of this multi-layer indexing structure inwards. At least one record per layer will satisfy the query criterion and the number of layers needed to be searched depends on the spatial distribution of records, the query-issued linear coefficients, and N, the number of records to be returned. When N is small compared to the total size of the database, answering the query typically requires searching only a small fraction of all relevant records, resulting in a tremendous speedup as compared to linearly scanning the entire dataset.
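The layered-hull idea can be demonstrated in 2-D with standard algorithms. This sketch uses Andrew's monotone chain rather than whatever hull routine the patent specifies, and it relies on the property stated above: the maximizer of any linear function lies on the outermost hull, so the top-n answers are confined to the first n layers.

```python
def _cross(o, a, b):
    """Cross product of vectors OA and OB (positive for a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2-D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def onion_layers(points):
    """Peel successive convex hulls, outermost first."""
    layers, rest = [], list(set(points))
    while rest:
        hull = convex_hull(rest)
        layers.append(hull)
        hullset = set(hull)
        rest = [p for p in rest if p not in hullset]
    return layers

def top_n(layers, coeff, n):
    """Answer max{c . x}: the k-th best record always lies within the
    first k layers, so only layers[:n] need to be scanned for top n."""
    score = lambda p: coeff[0] * p[0] + coeff[1] * p[1]
    cands = [p for layer in layers[:n] for p in layer]
    return sorted(cands, key=score, reverse=True)[:n]
```

When n is small, the layers scanned hold far fewer points than the whole dataset, which is the source of the speedup over a linear scan.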

  12. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review.

    PubMed

    Cooper, Chris; Booth, Andrew; Britten, Nicky; Garside, Ruth

    2017-11-28

    The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged. Numerous studies have demonstrated their potential in identifying studies or study data that would have been missed by bibliographic database searching alone. What is less certain is how supplementary search methods actually work, how they are applied, and the consequent advantages, disadvantages and resource implications of each search method. The aim of this study is to compare current practice in using supplementary search methods with methodological guidance. Four methodological handbooks informing systematic review practice in the UK were read and audited to establish current methodological guidance. Studies evaluating the use of supplementary search methods were identified by searching five bibliographic databases. Studies were included if they (1) reported practical application of a supplementary search method (descriptive) or (2) examined the utility of a supplementary search method (analytical) or (3) identified/explored factors that impact on the utility of a supplementary method, when applied in practice. Thirty-five studies were included in this review in addition to the four methodological handbooks. Studies were published between 1989 and 2016, and dates of publication of the handbooks ranged from 1994 to 2014. Five supplementary search methods were reviewed: contacting study authors, citation chasing, handsearching, searching trial registers and web searching. There is reasonable consistency between recommended best practice (handbooks) and current practice (methodological studies) as it relates to the application of supplementary search methods. The methodological studies provide useful information on the effectiveness of the supplementary search methods, often seeking to evaluate aspects of the method to improve effectiveness or efficiency. In this way, the studies advance the understanding of the supplementary search methods. 
Further research is required, however, so that a rational choice can be made about which supplementary search strategies should be used, and when.

  13. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
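The mapping strategy described above can be illustrated with a far simpler heuristic than the paper's IEA-PTS. The sketch below is a plain greedy clique builder plus the complement-graph reduction (maximum independent set of G = maximum clique of the complement of G), one of the reductions the unified approach exploits; it makes no claim to the paper's solution quality.

```python
def greedy_clique(adj):
    """Greedy heuristic: visit vertices highest-degree first and add each
    one that is adjacent to everything already in the clique. `adj` maps
    each vertex to its set of neighbors."""
    order = sorted(adj, key=lambda v: -len(adj[v]))
    clique = []
    for v in order:
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def complement(adj):
    """Complement graph: max independent set of G = max clique here."""
    verts = set(adj)
    return {v: (verts - {v}) - adj[v] for v in adj}
```

A metaheuristic like IEA-PTS replaces the single greedy pass with an evolving population of candidate cliques plus tabu moves, but the feasibility check (every member adjacent to every other) is the same.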

  14. A Search Engine to Access PubMed Monolingual Subsets: Proof of Concept and Evaluation in French

    PubMed Central

    Schuers, Matthieu; Soualmia, Lina Fatima; Grosjean, Julien; Kerdelhué, Gaétan; Kergourlay, Ivan; Dahamna, Badisse; Darmoni, Stéfan Jacques

    2014-01-01

    Background PubMed contains numerous articles in languages other than English. However, existing solutions to access these articles in the language in which they were written remain unconvincing. Objective The aim of this study was to propose a practical search engine, called Multilingual PubMed, which will permit access to a PubMed subset in 1 language and to evaluate the precision and coverage for the French version (Multilingual PubMed-French). Methods To create this tool, translations of MeSH were enriched (eg, adding synonyms and translations in French) and integrated into a terminology portal. PubMed subsets in several European languages were also added to our database using a dedicated parser. The response time for the generic semantic search engine was evaluated for simple queries. BabelMeSH, Multilingual PubMed-French, and 3 different PubMed strategies were compared by searching for literature in French. Precision and coverage were measured for 20 randomly selected queries. The results were evaluated as relevant to title and abstract, the evaluator being blind to search strategy. Results More than 650,000 PubMed citations in French were integrated into the Multilingual PubMed-French information system. The response times were all below the threshold defined for usability (2 seconds). Two search strategies (Multilingual PubMed-French and 1 PubMed strategy) showed high precision (0.93 and 0.97, respectively), but coverage was 4 times higher for Multilingual PubMed-French. Conclusions It is now possible to freely access biomedical literature using a practical search tool in French. This tool will be of particular interest for health professionals and other end users who do not read or query sufficiently in English. The information system is theoretically well suited to expand the approach to other European languages, such as German, Spanish, Norwegian, and Portuguese. PMID:25448528

  15. Early vertical correction of the deep curve of Spee.

    PubMed

    Martins, Renato Parsekian

    2017-01-01

    Even though few technological advancements have occurred in Orthodontics recently, the search for more efficient treatments continues. This paper analyses how to accelerate and improve one of the most arduous phases of orthodontic treatment, i.e., correction of the curve of Spee. The leveling of a deep curve of Spee can happen simultaneously with the alignment phase through a method called Early Vertical Correction (EVC). This technique uses two cantilevers affixed to the initial flexible archwire. This paper describes the force system produced by EVC and how to control its side effects. The EVC can reduce treatment time in malocclusions with deep curves of Spee, by combining two phases of the therapy, which clinicians ordinarily pursue sequentially.

  16. Bohr Hamiltonian for γ = 30° with Davidson potential

    NASA Astrophysics Data System (ADS)

    Yigitoglu, Ibrahim; Gokbulut, Melek

    2018-03-01

    A γ-rigid solution of the Bohr Hamiltonian for γ = 30° is constructed with the Davidson potential in the β part. This solution is called Z(4)-D. The energy eigenvalues and wave functions are obtained using the analytic method developed by Nikiforov and Uvarov. The calculated intraband and interband B(E2) transition rates are presented and compared with the Z(4) model predictions. The staggering behavior in γ-bands is considered in searching for Z(4)-D candidate nuclei. A variational procedure is applied to demonstrate that the Z(4) model is a solution of the critical point at the shape phase transition from spherical to rigid triaxial rotor.
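For context, the Davidson potential used in the β part of such Bohr-Hamiltonian solutions has the standard form (stated here from the general literature, not from this abstract):

```latex
u(\beta) = \beta^{2} + \frac{\beta_{0}^{4}}{\beta^{2}},
```

where β₀ marks the position of the minimum of the potential; β₀ = 0 recovers the harmonic-oscillator limit.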

  17. [Evidence-based medicine--method, critical appraisal and usefulness for a professionalized medical practice].

    PubMed

    Portwich, P

    2005-05-01

    The concept of evidence-based medicine (EBM) describes 5 steps that lead to a scientifically based solution of clinical problems. EBM is often reduced to the search for the best evidence (randomised controlled trials, meta-analyses) or the introduction of guidelines. This causes criticism and may have consequences for medicine. But EBM also emphasizes the so-called clinical expertise of the doctor, which is important for reference to the individual patient. This point of EBM is being developed further. Connected with the theory of professionalization by U. Oevermann, stressing the importance of hermeneutical competence, EBM turns out to be a sophisticated model of professional medical practice.

  18. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    PubMed Central

    Hauschild, Anne-Christin; Kopczynski, Dominik; D’Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-01-01

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors’ results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications. PMID:24957992
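The evaluation pipeline the abstract describes (train a classifier on the samples as represented by each peak detector's output, then compare classification accuracies) can be sketched roughly as follows. The data, the nearest-centroid classifier, and the train/test split below are invented stand-ins for the established machine learning methods the study actually uses:

```python
import numpy as np

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    """Classify each test sample by the nearer class centroid and report
    accuracy -- a minimal stand-in for the ML evaluation step."""
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    preds = np.array([
        min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
        for x in X_test
    ])
    return float((preds == y_test).mean())

rng = np.random.default_rng(0)
# Hypothetical peak-intensity matrices for two groups (e.g. healthy vs not):
# rows = samples, columns = intensities of the peaks a detector reported.
healthy = rng.normal(1.0, 0.3, size=(20, 5))
disease = rng.normal(1.8, 0.3, size=(20, 5))
X = np.vstack([healthy, disease])
y = np.array([0] * 20 + [1] * 20)
# Odd rows train, even rows test (a crude split, for illustration only).
acc = nearest_centroid_accuracy(X[::2], y[::2], X[1::2], y[1::2])
print(round(acc, 2))
```

Swapping in a different peak detector changes the feature matrix `X`; comparing the resulting accuracies is the study's first criterion.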

  19. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    PubMed

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.

  20. 76 FR 77558 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... at tradeact/taa/taa--search--form.cfm under the searchable listing of determinations or by calling...

  1. Facts about Broken Bones

    MedlinePlus

What Is a Broken Bone? A broken bone, also called a fracture (say: ...

  2. Library Computing.

    ERIC Educational Resources Information Center

    Goodgion, Laurel; And Others

    1986-01-01

    Eight articles in special supplement to "Library Journal" and "School Library Journal" cover a computer program called "Byte into Books"; microcomputers and the small library; creating databases with students; online searching with a microcomputer; quality automation software; Meckler Publishing Company's…

  3. Towards PubMed 2.0

    PubMed Central

    Fiorini, Nicolas; Lipman, David J; Lu, Zhiyong

    2017-01-01

    Staff from the National Center for Biotechnology Information in the US describe recent improvements to the PubMed search engine and outline plans for the future, including a new experimental site called PubMed Labs. PMID:29083299

  4. Living with Lupus (For Parents)

    MedlinePlus

... disease for both doctors and their patients. About Lupus: A healthy immune system produces proteins called antibodies ...

  5. How Do Asthma Medicines Work?

    MedlinePlus

... long-term control medicines. What Are Quick-Relief Medicines? Quick-relief medicines (also called rescue or fast- ...

  6. Kepler Team Marks Five Years in Space

    NASA Image and Video Library

    2014-03-07

On March 6, 2009, NASA's Kepler Space Telescope rocketed into the night skies above Cape Canaveral Air Force Station in Florida to find planets around other stars, called exoplanets, in search of potentially habitable worlds.

  7. High Blood Calcium (Hypercalcemia)

    MedlinePlus

    ... as sarcoidosis • Hormone disorders, such as overactive thyroid (hyperthyroidism) • A genetic condition called familial hypocalciuric hypercalcemia • Kidney ... topics: www.hormone.org (search for PHPT, calcium, hyperthyroidism, or osteoporosis) • MedlinePlus (National Institutes of Health-NIH): ...

  8. How to Safely Give Acetaminophen

    MedlinePlus

... without getting a doctor's OK first. What Is Acetaminophen Also Called? Acetaminophen is the generic name of ...

  9. COHERENT NETWORK ANALYSIS FOR CONTINUOUS GRAVITATIONAL WAVE SIGNALS IN A PULSAR TIMING ARRAY: PULSAR PHASES AS EXTRINSIC PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn

    2015-12-20

Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.
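The idea of maximizing a likelihood semi-analytically over an extrinsic phase can be illustrated on a one-signal toy problem. This is not the PTA network statistic itself, just the closed-form phase maximization for a single sinusoid of known frequency in noise; all numbers are made up:

```python
import numpy as np

# For a sinusoid A*cos(omega*t + phi), the inner product with the data is
# A*(cos(phi)*xc - sin(phi)*xs), where xc and xs are the two quadrature
# projections below. The maximizing phase therefore has a closed form, so
# phi never enters any numerical search.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
omega, true_phase = 2.0 * np.pi * 0.7, 1.1
data = 1.5 * np.cos(omega * t + true_phase) + 0.2 * rng.normal(size=t.size)

# Quadrature projections of the data onto cos and sin templates.
xc = np.sum(data * np.cos(omega * t))
xs = np.sum(data * np.sin(omega * t))

# Closed-form maximizer over phi of the match between data and template.
phase_hat = np.arctan2(-xs, xc) % (2.0 * np.pi)
print(round(phase_hat, 3), round(true_phase, 3))
```

In the PTA setting one such maximization is done per pulsar, which is what makes the dimension of the remaining numerical search independent of the array size.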

  10. Multiobjective evolutionary algorithm with many tables for purely ab initio protein structure prediction.

    PubMed

    Brasil, Christiane Regina Soares; Delbem, Alexandre Claudio Botazzo; da Silva, Fernando Luís Barroso

    2013-07-30

This article focuses on the development of an approach for ab initio protein structure prediction (PSP) without using any earlier knowledge from similar protein structures, such as fragment-based statistics or inference of secondary structures. Such an approach is called purely ab initio prediction. The article shows that well-designed multiobjective evolutionary algorithms can predict relevant protein structures in a purely ab initio way. One challenge for purely ab initio PSP is the prediction of structures with β-sheets. To work with such proteins, this research has also developed procedures to efficiently estimate hydrogen bond and solvation contribution energies. Considering van der Waals, electrostatic, hydrogen bond, and solvation contribution energies, the PSP is a problem with four energetic terms to be minimized. Each interaction energy term can be considered an objective of an optimization method. Combinatorial problems with four objectives have been considered too complex for the available multiobjective optimization (MOO) methods. The proposed approach, called "Multiobjective evolutionary algorithms with many tables" (MEAMT), can efficiently deal with four objectives through the combination thereof, performing a more adequate sampling of the objective space. Therefore, this method can better map the promising regions in this space, predicting structures in a purely ab initio way. In other words, MEAMT is an efficient optimization method for MOO, which explores simultaneously the search space as well as the objective space. MEAMT can predict structures with one or two domains with RMSDs comparable to values obtained by recently developed ab initio methods (GAPFCG, I-PAES, and Quark) that use different levels of earlier knowledge. Copyright © 2013 Wiley Periodicals, Inc.
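The "many tables" idea of sampling a four-objective space can be sketched abstractly: keep several tables, each ranking the population under a different combination of the four energy terms, alongside a Pareto-dominance test. Everything below (population, weightings, table size) is invented for illustration and is not the authors' MEAMT implementation:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

rng = np.random.default_rng(2)
population = rng.random((30, 4))   # 30 individuals x 4 energy terms (toy values)

# Each "table" keeps the individuals that score best under one weighting of
# the four energies (van der Waals, electrostatic, H-bond, solvation), so
# different tables sample different regions of the objective space.
weightings = np.array([
    [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],   # single terms
    [0.25, 0.25, 0.25, 0.25],                                  # all terms
])
tables = [population[np.argsort(population @ w)[:5]] for w in weightings]
print(len(tables), tables[0].shape)
```

A full algorithm would evolve each table's members and recombine across tables; the point here is only how multiple rankings partition a four-objective population.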

  11. Stride search: A general algorithm for storm detection in high-resolution climate data

    DOE PAGES

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...

    2016-04-13

This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
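The key property, search regions of fixed physical size independent of the data grid, can be sketched as below. The spacing rule (latitude stripes one region-radius apart, longitude spacing widened by 1/cos latitude) and all constants are assumptions for illustration, not the published algorithm:

```python
import math

def stride_search_centers(radius_km, earth_radius_km=6371.0):
    """Sketch of Stride-Search-style region placement (simplified): circular
    search regions of fixed great-circle radius tile the sphere, so longitude
    spacing widens toward the poles and region size never depends on the
    data set's grid resolution."""
    stride_deg = math.degrees(radius_km / earth_radius_km)
    centers = []
    lat = -90.0 + stride_deg / 2.0
    while lat <= 90.0 - stride_deg / 2.0 + 1e-9:
        # Fixed physical spacing => angular spacing grows as 1/cos(latitude).
        lon_stride = stride_deg / max(math.cos(math.radians(lat)), 1e-6)
        n = max(1, int(360.0 / min(lon_stride, 360.0)))
        centers.extend((lat, i * 360.0 / n) for i in range(n))
        lat += stride_deg
    return centers

centers = stride_search_centers(500.0)
print(len(centers))
```

A grid point search would instead visit every grid cell, so its cost grows with resolution; the number of centers above depends only on the chosen physical radius.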

  12. Simultaneous Genotype Calling and Haplotype Phasing Improves Genotype Accuracy and Reduces False-Positive Associations for Genome-wide Association Studies

    PubMed Central

    Browning, Brian L.; Yu, Zhaoxia

    2009-01-01

We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10^-7 significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040
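The core idea of joint calling, combining a per-marker intensity likelihood with a genotype prior from the surrounding haplotype context, can be sketched in Bayesian form. All probabilities below are invented for illustration; this is not the actual BEAGLECALL model:

```python
import numpy as np

def call_genotype(likelihood, prior, threshold=0.9):
    """Posterior = likelihood x prior (normalized); call the top genotype
    only if its posterior clears the confidence threshold, else no-call."""
    posterior = np.asarray(likelihood) * np.asarray(prior)
    posterior = posterior / posterior.sum()
    g = int(np.argmax(posterior))
    return (g, posterior) if posterior[g] >= threshold else (None, posterior)

# Ambiguous intensities alone (AA vs AB nearly tied) give a no-call ...
likelihood = [0.48, 0.47, 0.05]          # P(intensity | AA, AB, BB), invented
flat_prior = [1 / 3, 1 / 3, 1 / 3]
call_flat, _ = call_genotype(likelihood, flat_prior)

# ... but are resolved when the haplotype model strongly favors AA in context.
hap_prior = [0.90, 0.09, 0.01]
call_hap, post = call_genotype(likelihood, hap_prior)
print(call_flat, call_hap, round(float(post[0]), 2))
```

This also illustrates why joint calling yields fewer no-calls: the haplotype context supplies the evidence the intensities lack.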

  13. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine: it performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image usually exhibits scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most existing methods work in the spatial or spectral domain. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method exploits the relationship between the scale variation and the correlation value of two images: a few artificially scaled versions of the input image are sent to be compared with the template images. The correlation value increases with the scale factor on the interval 0.8~1 and decreases on the interval 1~1.2. The original scale of the input image can therefore be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8: scale factors beyond 1.2 are measured by scaling the input image by 1/2, 1/3, or 1/4, correlating the result with the template images, and estimating the corresponding scale factor within 0.8~1.2.
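The core loop of the time-sequential idea (rescale the input at a few candidate factors, correlate each against the template, take the argmax) can be sketched in 1-D. The signals, the resampling scheme, and the candidate scales below are invented for illustration:

```python
import numpy as np

def rescale(signal, factor):
    """Resample a 1-D signal as if magnified by `factor` (a simple stand-in
    for optically scaling an input image)."""
    x = np.arange(signal.size)
    return np.interp(x / factor, x, signal, left=0.0, right=0.0)

def estimate_scale(template, observed, candidates):
    """Correlate artificially rescaled versions of the observed input against
    the template and keep the candidate scale with the largest normalized
    correlation."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [ncc(template, rescale(observed, 1.0 / c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

x = np.linspace(-3, 3, 400)
template = np.exp(-x**2) * np.cos(4 * x)     # made-up template pattern
observed = rescale(template, 1.1)            # input "magnified" by 1.1
best = estimate_scale(template, observed, [0.8, 0.9, 1.0, 1.1, 1.2])
print(best)
```

In the optical correlator the inner correlation is done in hardware; only the sequence of trial scales is applied in time, which is what gives the method its name.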

  14. Global search in photoelectron diffraction structure determination using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Viana, M. L.; Díez Muiño, R.; Soares, E. A.; Van Hove, M. A.; de Carvalho, V. E.

    2007-11-01

    Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 × 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
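The GA search loop described (selection with elitism, crossover, and mutation iterated over generations to drive down an R-factor-like misfit) can be sketched on a toy two-parameter curve fit. The fitness function, population size, and rates below are invented for illustration, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
true_params = np.array([1.3, 0.4])                 # hidden "structure"
reference = true_params[0] * np.sin(6 * x) + true_params[1] * x

def r_factor(params):
    """Toy R-factor: normalized misfit between model and reference curve."""
    model = params[0] * np.sin(6 * x) + params[1] * x
    return float(np.sum((model - reference) ** 2) / np.sum(reference ** 2))

pop = rng.uniform(0, 2, size=(40, 2))              # random initial population
for _ in range(60):
    fitness = np.array([r_factor(p) for p in pop])
    elite = pop[np.argsort(fitness)[:8]]           # elitism: keep the best 8
    mothers = elite[rng.integers(0, 8, 32)]
    fathers = elite[rng.integers(0, 8, 32)]
    mask = rng.random((32, 2)) < 0.5               # uniform crossover
    children = np.where(mask, mothers, fathers)
    children += rng.normal(0, 0.05, children.shape)  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([r_factor(p) for p in pop])]
print(round(r_factor(best), 4))
```

In the real problem the model curve comes from a multiple-scattering simulation and the parameters are structural and non-structural surface parameters, but the evolutionary loop has the same shape.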

  15. “Boosting” in Paralympic athletes with spinal cord injury: doping without drugs

    PubMed Central

    Mazzeo, Filomena; Santamaria, Stefania; Iavarone, Alessandro

    2015-01-01

    Summary The intentional activation of autonomic dysreflexia (AD, also called “boosting”), a practice sometimes used by athletes affected by spinal cord injury (SCI), is banned by the International Paralympic Committee (IPC). Although various studies have addressed doping and AD as separate issues, studies evaluating AD as a doping method are lacking. The aim of this brief review is to contribute to better understanding of the relationship between doping and AD. We conducted a literature search of the PubMed database (from 1994 onwards). The key search terms “autonomic dysreflexia” and “boosting” were cross-referenced with “sport performance”. The official Paralympic website was also viewed. AD is a potent sympathetic reflex, due to a massive release of noradrenaline, that results in marked vasoconstriction distal to the level of the lesion. Athletes with SCI often self-inflict physical suffering in order to induce this phenomenon, which carries high health risks (i.e., hypertension, cerebral hemorrhage, stroke and sudden death). Boosting is a practice that can be compared to doping methods and the IPC expressly prohibits it. Any deliberate attempt to induce AD, if detected, will lead to disqualification from the sporting event and subsequent investigation by the IPC Legal and Ethics Committee. PMID:26415788

  16. PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.

    PubMed

    Ng, Marcus C K; Fong, Simon; Siu, Shirley W I

    2015-06-01

Protein-ligand docking is an essential step in the modern drug discovery process. The challenge here is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution time reduction of 51-60% without compromising the prediction accuracies in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of a docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo.
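A hybrid of the kind the abstract describes can be illustrated on a toy landscape: a particle swarm explores globally, and the swarm best is polished locally each iteration. This is not PSOVina itself; the energy function is invented and a crude coordinate-descent polish stands in for the gradient-based BFGS step:

```python
import numpy as np

def energy(p):  # toy 2-D "scoring function" with a mild ripple
    return float((p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2
                 + 0.3 * np.sin(3 * p[0]) ** 2)

def local_polish(p, step=0.05, iters=20):
    """Crude coordinate-descent refinement (stand-in for BFGS)."""
    p = p.copy()
    for _ in range(iters):
        for d in range(p.size):
            for s in (step, -step):
                trial = p.copy()
                trial[d] += s
                if energy(trial) < energy(p):
                    p = trial
    return p

rng = np.random.default_rng(4)
pos = rng.uniform(-3, 3, (20, 2))        # 20 particles, 2 "pose" parameters
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = pos[np.argmin([energy(p) for p in pos])]
for _ in range(40):
    r1, r2 = rng.random((20, 1)), rng.random((20, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(20):
        if energy(pos[i]) < energy(pbest[i]):
            pbest[i] = pos[i]
    # Hybrid step: locally refine the current swarm best.
    cand = local_polish(pbest[np.argmin([energy(p) for p in pbest])])
    if energy(cand) < energy(gbest):
        gbest = cand
print(np.round(gbest, 2))
```

The division of labor is the point: the swarm handles the rugged global search while the local step sharpens each promising pose cheaply.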

  17. Science in 60 – Searching for Dark Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Andrea

    2016-09-30

Nearly 14,000 feet up the slopes of Mexico's Sierra Negra volcano, a unique observatory called HAWC (High-Altitude Water Cherenkov Gamma Ray Observatory) is providing insight into some of the most violent phenomena in the known universe, such as supernova explosions and the evolution of supermassive black holes. For Dr. Andrea Albert, the Marie Curie Distinguished Postdoctoral Fellow at Los Alamos National Lab, HAWC provides another distinct opportunity: a way to search for signals from dark matter.

  18. SpaceX TESS Liftoff

    NASA Image and Video Library

    2018-04-18

    A SpaceX Falcon 9 rocket lifts off from Space Launch Complex 40 at Cape Canaveral Air Force Station in Florida, carrying NASA's Transiting Exoplanet Survey Satellite (TESS). Liftoff was at 6:51 p.m. EDT. TESS will search for planets outside of our solar system. The mission will find exoplanets that periodically block part of the light from their host stars, events called transits. The satellite will survey the nearest and brightest stars for two years to search for transiting exoplanets.

  19. Twinkle, Twinkle, Little Laser by Ben Bova

    NASA Astrophysics Data System (ADS)

    Bova, Ben

    2000-03-01

    Radio astronomers have had no success in the search for extraterrestrial intelligence (SETI). Astronomers are now studying the heavens for signals that intelligent beings might send using lasers. Laser lights have the advantage of directionality, monochromaticity, and coherence. This research, called "optical SETI," looks for optical or infrared pulses with detectors that can pick up a broad spectrum of frequencies. By confining the search to stars similar to the Sun, scientists hope to find evidence of life other than ours.

  20. Optimal Sector Sampling for Drive Triage

    DTIC Science & Technology

    2013-06-01

known files, which we call target data, that could help identify a drive holding evidence such as child pornography or malware. Triage is needed to sift through drives...situations where the user is looking for known data.1 One example is a law enforcement officer searching for evidence of child pornography from a large num

  1. A Hybrid Approach to Finding Relevant Social Media Content for Complex Domain Specific Information Needs.

    PubMed

    Cameron, Delroy; Sheth, Amit P; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A

    2014-12-01

    While contemporary semantic search systems offer to improve classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and "intelligible constructs" not typically modeled in ontologies. These intelligible constructs convey essential information that include notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. 
When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving relevant documents when compared with three existing search systems.
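The two-level grammar idea, a top-level template that captures a broad textual pattern and low-level patterns that interpret the specific expressions inside it, can be sketched with regular expressions. The template, the concept lists, and the example post below are hypothetical illustrations in the spirit described, not the actual PREDOSE grammar:

```python
import re

# Low-level patterns: a drug concept (ontology, assumed names), a
# rule-derived dosage expression, and a rule-derived frequency expression.
LOW_LEVEL = {
    "drug":      r"(?P<drug>oxycodone|hydrocodone|buprenorphine)",
    "dosage":    r"(?P<dosage>\d+(?:\.\d+)?\s?mg)",
    "frequency": r"(?P<frequency>(?:once|twice|\d+\s?times)\s(?:a|per)\s(?:day|week))",
}

# Top-level template: "took/take/used <DRUG> [<DOSAGE>] [<FREQUENCY>]".
TOP_LEVEL = re.compile(
    r"\b(?:took|take|used)\s" + LOW_LEVEL["drug"]
    + r"(?:\s" + LOW_LEVEL["dosage"] + r")?"
    + r"(?:\s" + LOW_LEVEL["frequency"] + r")?",
    re.IGNORECASE,
)

post = "I took oxycodone 30 mg twice a day for the pain"
m = TOP_LEVEL.search(post)
print({k: v for k, v in m.groupdict().items() if v})
```

A real CFG would compose such productions recursively and tie the drug names to ontology concepts; the sketch only shows how the two interpretation levels nest.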

  2. A Hybrid Approach to Finding Relevant Social Media Content for Complex Domain Specific Information Needs

    PubMed Central

    Cameron, Delroy; Sheth, Amit P.; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A.

    2015-01-01

    While contemporary semantic search systems offer to improve classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and “intelligible constructs” not typically modeled in ontologies. These intelligible constructs convey essential information that include notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. 
When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving relevant documents when compared with three existing search systems. PMID:25814917

  3. A user-friendly tool for medical-related patent retrieval.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different ways to name the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules such as chemical query, normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced more mixed results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts, although this result should be weighed against the limited evaluation sample. We can also assume that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.

  4. 77 FR 12086 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-28

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under... at tradeact/taa/taa--search--form.cfm under the searchable listing of determinations or by calling...

  5. 76 FR 81991 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under...-- search--form.cfm under the searchable listing of determinations or by calling the Office of Trade...

  6. 77 FR 13356 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-06

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under... site at tradeact/taa/taa--search--form.cfm under the searchable listing of determinations or by calling...

  8. 77 FR 13356 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-06

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria.../taa/taa-- search--form.cfm under the searchable listing of determinations or by calling the Office of...

  9. 77 FR 6592 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under...-- search--form.cfm under the searchable listing of determinations or by calling the Office of Trade...

  10. 77 FR 9974 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-21

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria.../taa--search--form.cfm under the searchable listing of determinations or by calling the Office of Trade...

  11. Tuberculosis

    MedlinePlus

    ... Staying Safe Videos for Educators Search English Español Tuberculosis KidsHealth / For Teens / Tuberculosis What's in this article? TB Basics Signs and ... When to Call the Doctor Print en español Tuberculosis TB Basics Tuberculosis (also known as "TB") is ...

  12. Disparities in HIV/AIDS, Viral Hepatitis, STDs, and TB

    MedlinePlus

    ... Search The CDC Health Disparities in HIV/AIDS, Viral Hepatitis, STDs, and TB Note: Javascript is disabled or ... Other Pacific Islanders MMWR Publications HIV and AIDS Viral Hepatitis STDs Tuberculosis Training and Networking Resources Call for ...

  13. Irritable Bowel Syndrome

    MedlinePlus

    ... Staying Safe Videos for Educators Search English Español Irritable Bowel Syndrome KidsHealth / For Teens / Irritable Bowel Syndrome What's in ... intestinal disorder called irritable bowel syndrome. What Is Irritable Bowel Syndrome? Irritable bowel syndrome (IBS) is a common intestinal ...

  14. Great tits search for, capture, kill and eat hibernating bats

    PubMed Central

    Estók, Péter; Zsebők, Sándor; Siemers, Björn M.

    2010-01-01

    Ecological pressure paired with opportunism can lead to surprising innovations in animal behaviour. Here, we report predation of great tits (Parus major) on hibernating pipistrelle bats (Pipistrellus pipistrellus) at a Hungarian cave. Over two winters, we directly observed 18 predation events. The tits specifically and systematically searched for and killed bats for food. A substantial decrease in predation on bats after experimental provisioning of food to the tits further supports the hypothesis that bat-killing serves a foraging purpose in times of food scarcity. Finally, we conducted a playback experiment to test whether tits would eavesdrop on calls of awakening bats to find them in rock crevices. The tits could clearly hear the calls and were attracted to the loudspeaker. Records for tit predation on bats at this cave now span more than ten years, raising the question of whether cultural transmission plays a role in the spread of this foraging innovation. PMID:19740892

  15. Complications of bariatric surgery: presentation and emergency management--a review.

    PubMed

    Monkhouse, S J W; Morgan, J D T; Norton, S A

    2009-05-01

    The prevalence of obesity surgery is increasing rapidly in the UK as demand rises. Consequently, general surgeons on-call may be faced with the complications of such surgery and need to have an understanding about how to manage them, at least initially. Obesity surgery is mainly offered in tertiary centres but patients may present with problems to their local district hospital. This review summarises the main complications that may be encountered. A full literature search was carried out looking at articles published in the last 10 years. Keywords for search purposes included bariatric, surgery, complications, emergency and management. Complications of bariatric surgery have been extensively written about but never in a format that is designed to aid the on-call surgeon. The intricate details and rare complications have been excluded to concentrate on those symptoms and signs that are likely to be encountered by the emergency team.

  16. Efficient Generation of Dancing Animation Synchronizing with Music Based on Meta Motion Graphs

    NASA Astrophysics Data System (ADS)

    Xu, Jianfeng; Takagi, Koichi; Sakazawa, Shigeyuki

    This paper presents a system for automatic generation of dancing animation that is synchronized with a piece of music by re-using motion capture data. Basically, the dancing motion is synthesized according to the rhythm and intensity features of the music. For this purpose, we propose a novel meta motion graph structure to embed the necessary features, including both rhythm and intensity, which is constructed on the motion capture database beforehand. In this paper, we consider two scenarios, non-streaming music and streaming music, where global search and local search are required respectively. In the former case, once a piece of music is input, an efficient dynamic programming algorithm can be employed to globally search for an optimal path in the meta motion graph, where an objective function is properly designed by measuring the quality of beat synchronization, intensity matching, and motion smoothness. In the latter case, the input music is stored in a buffer in a streaming mode, and an efficient search method is presented for a certain amount of music data (called a segment) in the buffer with the same objective function, resulting in a segment-based search approach. For streaming applications, we define an additional property in the above meta motion graph to deal with the unpredictable future music, which guarantees that there is some motion to match the unknown remaining music. A user study with 60 subjects demonstrates that our system outperforms the state-of-the-art techniques in both scenarios. Furthermore, our system improves the synthesis speed greatly (maximum speedup of more than 500 times), which is essential for mobile applications. We have implemented our system on commercially available smart phones and confirmed that it works well on these mobile phones.
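
    The global search described above can be sketched as a simple dynamic program over a motion graph, one clip per music segment. This is a hedged illustration: the clip ids, transition table and cost function below are invented, and the real objective combines beat synchronization, intensity matching and motion smoothness.

```python
# Hypothetical sketch of a global best-path search over a motion graph;
# clip names, allowed transitions and the cost function are invented.

def best_path(clips, edges, segments, cost):
    """Return (total_cost, path) minimizing the summed per-segment cost,
    walking only along allowed motion-graph transitions."""
    # dp maps each clip to the best (cost, path) ending at that clip
    dp = {c: (cost(c, segments[0]), [c]) for c in clips}
    for seg in segments[1:]:
        nxt = {}
        for c in clips:
            # predecessors that may transition into clip c
            preds = [p for p in dp if c in edges.get(p, ())]
            if not preds:
                continue
            p = min(preds, key=lambda q: dp[q][0])
            nxt[c] = (dp[p][0] + cost(c, seg), dp[p][1] + [c])
        dp = nxt
    return min(dp.values(), key=lambda t: t[0])
```

    With a toy cost that matches each clip to segments of a preferred intensity, the program recovers the zero-cost alternating path.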

  17. Improved Spectroscopy of Molecular Ions in the Mid-Infrared with Up-Conversion Detection

    NASA Astrophysics Data System (ADS)

    Markus, Charles R.; Perry, Adam J.; Hodges, James N.; McCall, Benjamin J.

    2016-06-01

    Heterodyne detection, velocity modulation, and cavity enhancement are useful tools for observing rovibrational transitions of important molecular ions. We have utilized these methods to investigate a number of molecular ions, such as H_3^+, CH_5^+, HeH^+, and OH^+. In the past, parasitic etalons and the lack of fast and sensitive detectors in the mid-infrared have limited the number of transitions we could measure with MHz-level precision. Recently, we have significantly reduced the amplitude of unwanted interference fringes with a Brewster-plate spoiler. We have also developed a detection scheme which up-converts the mid-infrared light with difference frequency generation, which allows the use of a faster and more sensitive avalanche photodetector. The higher detection bandwidth allows for optimized heterodyne detection at higher modulation frequencies. The overall gain in signal-to-noise from both improvements will enable extensive high-precision line lists of molecular ions and searches for previously unobserved transitions. K.N. Crabtree, J.N. Hodges, B.M. Siller, A.J. Perry, J.E. Kelly, P.A. Jenkins II, and B.J. McCall, Chem. Phys. Lett. 551 (2012) 1-6. A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, and B.J. McCall, J. Mol. Spec. 317 (2015) 71-73. J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, and B.J. McCall, J. Chem. Phys. 139 (2013) 164291. A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, and B.J. McCall, J. Chem. Phys. 141 (2014) 101101. C.R. Markus, J.N. Hodges, A.J. Perry, G.S. Kocheril, H.S.P. Muller, and B.J. McCall, Astrophys. J. 817 (2016) 138.

  18. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

    The restricted random search method is proposed as a simple Monte Carlo sampling method for rapidly locating minima in the multiple minima problem. The method is based on taboo search, recently applied to continuous test functions. It uses the concept of a taboo region instead of a taboo list, so that sampling near a previously visited configuration is restricted. Applied to 2-dimensional test functions and to argon clusters, the method proves a practical and efficient way to find near-global configurations of both.
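
    A minimal sketch of the idea, under the assumption that a taboo region is simply a ball of fixed radius around every previously accepted sample (function and parameter names are illustrative):

```python
import random

def restricted_random_search(f, bounds, n_samples=2000, taboo_radius=0.5, seed=0):
    """Monte Carlo minimization that rejects samples falling inside the
    taboo region (a ball) around any previously accepted configuration."""
    rng = random.Random(seed)
    visited = []                      # centres of taboo regions
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        # restriction: skip configurations too close to old ones
        if any(sum((a - b) ** 2 for a, b in zip(x, v)) < taboo_radius ** 2
               for v in visited):
            continue
        visited.append(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

    On a two-dimensional sum-of-squares test function the rejection rule spreads accepted samples across the domain instead of clustering them, which is what makes the search efficient in multiple-minima landscapes.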

  19. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines.

    PubMed

    Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W

    2009-03-01

    LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets, so employing more than one search engine could increase the number of peptides (and proteins) identified, if an appropriate mechanism for combining data can be defined. We have developed a search-engine-independent score based on FDR, called the FDR Score, which allows peptide identifications from different search engines to be combined. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines, or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (the combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
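
    The decoy-based FDR estimation underlying such a score can be illustrated as follows. This is a hedged sketch, not the authors' exact algorithm: the simple decoys-over-targets estimator and the function names are assumptions.

```python
def fdr_at_threshold(scores, threshold):
    """scores: list of (score, is_decoy) pairs. Estimate the FDR above a
    score threshold as the ratio of decoy hits to target hits."""
    targets = sum(1 for s, d in scores if s >= threshold and not d)
    decoys = sum(1 for s, d in scores if s >= threshold and d)
    return decoys / targets if targets else 0.0

def fdr_score(scores):
    """Assign each target identification the FDR estimated at its own
    score, making scores from different engines comparable."""
    return {s: fdr_at_threshold(scores, s) for s, d in scores if not d}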

  20. Attribute-Based Proxy Re-Encryption with Keyword Search

    PubMed Central

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue a search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate keyword search capability to other data users in a fine-grained access control manner by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as an integration of technologies from the fields of attribute-based cryptography and proxy re-encryption cryptography. PMID:25549257

  1. Attribute-based proxy re-encryption with keyword search.

    PubMed

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue a search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate keyword search capability to other data users in a fine-grained access control manner by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as an integration of technologies from the fields of attribute-based cryptography and proxy re-encryption cryptography.

  2. Chemical-text hybrid search engines.

    PubMed

    Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J

    2010-01-01

    As the amount of chemical literature increases, it is critical that researchers be enabled to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search, and the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but also was applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers with much improved speed and accuracy.
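
    The core idea of canonical keyword indexing can be sketched as a pre-indexing normalization step. The synonym table and canonical tokens below are invented for illustration; a real deployment would resolve entities against a chemical registry rather than a hand-written map.

```python
# Map every surface form of an entity to one canonical keyword before
# indexing, so a text search engine finds all variants at once.
CANONICAL = {
    "acetylsalicylic acid": "CHEM_ACETYLSALICYLIC_ACID",
    "aspirin": "CHEM_ACETYLSALICYLIC_ACID",
}

def canonicalize(text):
    out = text.lower()
    # replace longer surface forms first to avoid partial overlaps
    for form in sorted(CANONICAL, key=len, reverse=True):
        out = out.replace(form, CANONICAL[form])
    return out

def build_index(docs):
    """Build a simple inverted index over canonicalized documents."""
    index = {}
    for doc_id, text in docs.items():
        for token in canonicalize(text).split():
            index.setdefault(token.strip(".,"), set()).add(doc_id)
    return index
```

    A query for the canonical token then retrieves documents mentioning any synonym of the compound.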

  3. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    NASA Astrophysics Data System (ADS)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during their curing in closed molds lead to changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence to maximize flexural stiffness and minimize springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swapping. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
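
    A swap-based repair strategy of the kind described can be sketched as greedy hill climbing over stacking sequences. The toy objective below (a weighted ply-position score) is a stand-in, not a real flexural-stiffness or springback model.

```python
import itertools

def swap_search(sequence, objective):
    """Repeatedly apply the single pairwise swap that most improves the
    objective; stop at a local optimum."""
    seq = list(sequence)
    while True:
        base = objective(seq)
        best, best_pair = base, None
        for i, j in itertools.combinations(range(len(seq)), 2):
            seq[i], seq[j] = seq[j], seq[i]      # try a swap
            val = objective(seq)
            if val > best:
                best, best_pair = val, (i, j)
            seq[i], seq[j] = seq[j], seq[i]      # undo it
        if best_pair is None:
            return seq, base
        i, j = best_pair
        seq[i], seq[j] = seq[j], seq[i]          # keep the best swap
```

    For a linear position-weighted objective, this converges to the descending-sorted sequence, which is the global optimum by the rearrangement inequality.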

  4. Toward building a comprehensive data mart

    NASA Astrophysics Data System (ADS)

    Boulware, Douglas; Salerno, John; Bleich, Richard; Hinman, Michael L.

    2004-04-01

    To uncover new relationships or patterns, one must first build a corpus of data, or what some call a data mart. How can we make sure we have collected all the pertinent data and maximized coverage? Hundreds of search engines are available on the Internet today. Which one is best? Is one better for one problem and a second better for another? Are meta-search engines better than individual search engines? In this paper we look at one possible approach to developing a methodology for comparing search engines. Before presenting this methodology, we first provide our motivation: the need for increased coverage. We next investigate how we can obtain ground truth and what it can tell us about the Internet and search engine capabilities. We conclude by developing a methodology in which we compare a number of search engines and show how we can increase overall coverage and thus build a more comprehensive data mart.

  5. Starry Messages - Searching for Signatures of Interstellar Archaeology

    NASA Astrophysics Data System (ADS)

    Carrigan, R. A., Jr.

    Searching for signatures of cosmic-scale archaeological artifacts such as Dyson spheres or Kardashev civilizations is an interesting alternative to conventional SETI. Uncovering such an artifact does not require the intentional transmission of a signal on the part of the originating civilization. This type of search is called interstellar archaeology or sometimes cosmic archaeology. The detection of intelligence elsewhere in the Universe with interstellar archaeology or SETI would have broad implications for science. For example, the constraints of the anthropic principle would have to be loosened if a different type of intelligence was discovered elsewhere. A variety of interstellar archaeology signatures are discussed including non-natural planetary atmospheric constituents, stellar doping with isotopes of nuclear wastes, Dyson spheres, as well as signatures of stellar and galactic-scale engineering. The concept of a Fermi bubble due to interstellar migration is introduced in the discussion of galactic signatures. These potential interstellar archaeological signatures are classified using the Kardashev scale. A modified Drake equation is used to evaluate the relative challenges of finding various sources. With few exceptions, interstellar archaeological signatures are clouded and beyond current technological capabilities. However, SETI for so-called cultural transmissions and planetary atmosphere signatures are within reach.

  6. Starry messages: Searching for signatures of interstellar archaeology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrigan, Richard A., Jr.; /Fermilab

    2009-12-01

    Searching for signatures of cosmic-scale archaeological artifacts such as Dyson spheres or Kardashev civilizations is an interesting alternative to conventional SETI. Uncovering such an artifact does not require the intentional transmission of a signal on the part of the original civilization. This type of search is called interstellar archaeology or sometimes cosmic archaeology. The detection of intelligence elsewhere in the Universe with interstellar archaeology or SETI would have broad implications for science. For example, the constraints of the anthropic principle would have to be loosened if a different type of intelligence was discovered elsewhere. A variety of interstellar archaeology signatures are discussed including non-natural planetary atmospheric constituents, stellar doping with isotopes of nuclear wastes, Dyson spheres, as well as signatures of stellar and galactic-scale engineering. The concept of a Fermi bubble due to interstellar migration is introduced in the discussion of galactic signatures. These potential interstellar archaeological signatures are classified using the Kardashev scale. A modified Drake equation is used to evaluate the relative challenges of finding various sources. With few exceptions, interstellar archaeological signatures are clouded and beyond current technological capabilities. However, SETI for so-called cultural transmissions and planetary atmosphere signatures are within reach.

  7. Examining A Health Care Price Transparency Tool: Who Uses It, And How They Shop For Care.

    PubMed

    Sinaiko, Anna D; Rosenthal, Meredith B

    2016-04-01

    Calls for transparency in health care prices are increasing, in an effort to encourage and enable patients to make value-based decisions. Yet there is very little evidence of whether and how patients use health care price transparency tools. We evaluated the experiences, in the period 2011-12, of an insured population of nonelderly adults with Aetna's Member Payment Estimator, a web-based tool that provides real-time, personalized, episode-level price estimates. Overall, use of the tool increased during the study period but remained low. Nonetheless, for some procedures the number of people searching for prices of services (called searchers) was high relative to the number of people who received the service (called patients). Among Aetna patients who had an imaging service, childbirth, or one of several outpatient procedures, searchers for price information were significantly more likely to be younger and healthier and to have incurred higher annual deductible spending than patients who did not search for price information. A campaign to deliver price information to consumers may be important to increase patients' engagement with price transparency tools. Project HOPE—The People-to-People Health Foundation, Inc.

  8. Searching the ASRS Database Using QUORUM Keyword Search, Phrase Search, Phrase Generation, and Phrase Discovery

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W.; Connors, Mary M. (Technical Monitor)

    2001-01-01

    To support Search Requests and Quick Responses at the Aviation Safety Reporting System (ASRS), four new QUORUM methods have been developed: keyword search, phrase search, phrase generation, and phrase discovery. These methods build upon the core QUORUM methods of text analysis, modeling, and relevance-ranking. QUORUM keyword search retrieves ASRS incident narratives that contain one or more user-specified keywords in typical or selected contexts, and ranks the narratives on their relevance to the keywords in context. QUORUM phrase search retrieves narratives that contain one or more user-specified phrases, and ranks the narratives on their relevance to the phrases. QUORUM phrase generation produces a list of phrases from the ASRS database that contain a user-specified word or phrase. QUORUM phrase discovery finds phrases that are related to topics of interest. Phrase generation and phrase discovery are particularly useful for finding query phrases for input to QUORUM phrase search. The presentation of the new QUORUM methods includes: a brief review of the underlying core QUORUM methods; an overview of the new methods; numerous, concrete examples of ASRS database searches using the new methods; discussion of related methods; and, in the appendices, detailed descriptions of the new methods.

  9. Identification of Anisotropic Criteria for Stratified Soil Based on Triaxial Tests Results

    NASA Astrophysics Data System (ADS)

    Tankiewicz, Matylda; Kawa, Marek

    2017-09-01

    The paper presents an identification methodology for anisotropic criteria based on triaxial test results. The material considered is varved clay, a sedimentary soil occurring in central Poland characterized by a so-called "layered microstructure". Strength was examined in standard triaxial tests, and the results include the estimated peak strength obtained for a wide range of orientations and confining pressures. Two models were chosen as potentially adequate for describing the tested material, namely the Pariseau criterion and its conjunction with the Jaeger weakness plane. Material constants were obtained by fitting the models to the experimental results. The identification procedure is based on the least squares method: optimal parameter values are searched for between specified bounds by sequentially decreasing the distance between sample points and reducing the length of the searched range. Optimal parameters were obtained for both models considered. The comparison of theoretical and experimental results, as well as an assessment of the suitability of the selected criteria for the specified range of confining pressures, is presented.
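
    The bounded search with a sequentially shrinking range can be sketched for a single parameter as follows. The grid size and shrink factor are assumptions, and the real procedure fits several constants of the Pariseau model simultaneously against measured peak strengths.

```python
def shrink_search(misfit, lo, hi, n_points=11, n_rounds=20, shrink=0.5):
    """Sample a uniform grid in [lo, hi], keep the point with the least
    squared misfit, then narrow the interval around it and repeat."""
    for _ in range(n_rounds):
        step = (hi - lo) / (n_points - 1)
        samples = [lo + k * step for k in range(n_points)]
        best = min(samples, key=misfit)
        half = (hi - lo) * shrink / 2
        lo, hi = max(lo, best - half), min(hi, best + half)
    return best
```

    Because the grid step is always smaller than the retained half-interval, the true minimum stays inside the shrinking range and the estimate converges geometrically.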

  10. Current state of purification, isolation and analysis of bacteriocins produced by lactic acid bacteria.

    PubMed

    Kaškonienė, Vilma; Stankevičius, Mantas; Bimbiraitė-Survilienė, Kristina; Naujokaitytė, Gintarė; Šernienė, Loreta; Mulkytė, Kristina; Malakauskas, Mindaugas; Maruška, Audrius

    2017-02-01

    Scientific interest in the search for natural microbial inhibitors has not faded for many years. The search for natural antibiotics, so-called bacteriocins, which are produced by lactic acid bacteria (LAB), has attracted great attention from scientists over the last century as a way to reduce the use of synthetic food additives. Pure bacteriocins with a wide spectrum of antibacterial activity are promising natural biopreservatives. It is also possible to use bacteriocin-producing LAB as starter cultures for the fermentation of some food products, in order to increase their shelf-life where synthetic preservatives are not permitted. Many studies focus on the isolation of new bacteriocins from traditional fermented foods, dairy products and other foods, or sometimes even from unusual non-food matrices. Bacteriocin-producing bacteria have been isolated from different sources, with differing antibacterial activity against food-borne microorganisms. This review covers the classification of bacteriocins, the diversity of sources of bacteriocin-producing LAB, the antibacterial spectra of isolated bacteriocins, and analytical methods for bacteriocin purification and analysis within the last 15 years.

  11. Adaptive building skin structures

    NASA Astrophysics Data System (ADS)

    Del Grosso, A. E.; Basso, P.

    2010-12-01

    The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face; the search for low-energy-consumption or even energy-harvesting green buildings is one example. This paper first presents a review of the above problems and technologies, which shows how their solution requires a multidisciplinary approach involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is a novel optimization process which leads the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Jeff; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109; Cornish, Neil J.

    Low frequency gravitational wave detectors, such as the Laser Interferometer Space Antenna (LISA), will have to contend with large foregrounds produced by millions of compact galactic binaries in our galaxy. While these galactic signals are interesting in their own right, the unresolved component can obscure other sources. The science yield for the LISA mission can be improved if the brighter and more isolated foreground sources can be identified and regressed from the data. Since the signals overlap with one another, we are faced with a 'cocktail party' problem of picking out individual conversations in a crowded room. Here we present and implement an end-to-end solution to the galactic foreground problem that is able to resolve tens of thousands of sources from across the LISA band. Our algorithm employs a variant of the Markov chain Monte Carlo (MCMC) method, which we call the blocked annealed Metropolis-Hastings (BAM) algorithm. Following a description of the algorithm and its implementation, we give several examples ranging from searches for a single source to searches for hundreds of overlapping sources. Our examples include data sets from the first round of mock LISA data challenges.
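
    For orientation, a generic random-walk Metropolis-Hastings step is sketched below. This only illustrates the MCMC family the BAM algorithm belongs to; the blocked and annealed refinements, and the multi-source LISA likelihood, are not reproduced here.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=1):
    """One-dimensional random-walk Metropolis-Hastings sampler over an
    unnormalized log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0, step)       # Gaussian proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

    Run against a standard-normal log-posterior, the chain's sample mean and variance approach 0 and 1 respectively.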

  13. Positive consequences of SETI before detection

    NASA Astrophysics Data System (ADS)

    Tough, A.

    Even before a signal is detected, six positive consequences will result from the scientific search for extraterrestrial intelligence, usually called SETI. (1) Humanity's self-image: SETI has enlarged our view of ourselves and enhanced our sense of meaning. Increasingly, we feel a kinship with the civilizations whose signals we are trying to detect. (2) A fresh perspective: SETI forces us to think about how extraterrestrials might perceive us. This gives us a fresh perspective on our society's values, priorities, laws and foibles. (3) Questions: SETI is stimulating thought and discussion about several fundamental questions. (4) Education: some broad-gage educational programs have already been centered around SETI. (5) Tangible spin-offs: in addition to providing jobs for some people, SETI provides various spin-offs, such as search methods, computer software, data, and international scientific cooperation. (6) Future scenarios: SETI will increasingly stimulate us to think carefully about possible detection scenarios and their consequences, about our reply, and generally about the role of extraterrestrial communication in our long-term future. Such thinking leads, in turn, to fresh perspectives on the SETI enterprise itself.

  14. Science in 60 – Searching for Dark Matter

    ScienceCinema

    Albert, Andrea

    2018-06-12

    Nearly 14,000 feet up the slopes of Mexico's Sierra Negra volcano, a unique observatory called HAWC (High-Altitude Water Cherenkov Gamma Ray Observatory) is providing insight into some of the most violent phenomena in the known universe, such as supernova explosions and the evolution of supermassive black holes. For Dr. Andrea Albert, the Marie Curie Distinguished Postdoctoral Fellow at Los Alamos National Lab, HAWC provides another distinct opportunity: a way to search for signals from dark matter.

  15. Searching for the Cases of Acute Organophosphorus Pesticides Poisoning by JOIS

    NASA Astrophysics Data System (ADS)

    Futagami, Kojiro; Fujii, Toshiyuki; Horioka, Masayoshi; Asakura, Hajime; Fukagawa, Mitsuro

    The cholinesterase reactivator PAM (pralidoxime) is used, together with the anticholinergic agent atropine, in the treatment of organophosphate poisoning. However, some recent reports demonstrated that PAM is ineffective in some cases of so-called low-toxicity organophosphate poisoning. To assess the efficacy of PAM in clinical treatment, we therefore searched for case reports of these poisonings using JOIS. We compared the specificity of each database and present some examples from this on-line information retrieval.

  16. SpaceX TESS Liftoff

    NASA Image and Video Library

    2018-04-18

    A SpaceX Falcon 9 rocket soars upward after lifting off from Space Launch Complex 40 at Cape Canaveral Air Force Station in Florida, carrying NASA's Transiting Exoplanet Survey Satellite (TESS). Liftoff was at 6:51 p.m. EDT. TESS will search for planets outside of our solar system. The mission will find exoplanets that periodically block part of the light from their host stars, events called transits. The satellite will survey the nearest and brightest stars for two years to search for transiting exoplanets.

  17. Improving the Capacity of Language Recognition Systems to Handle Rare Languages Using Radio Broadcast Data

    DTIC Science & Technology

    2011-01-01

    Training databases for the LRE2007 and LRE2009 systems: CF CallFriend, CH CallHome, F Fisher English Part 1 and 2, F Fisher Levantine Arabic, F HKUST Mandarin...

  18. Reflective random indexing for semi-automatic indexing of the biomedical literature.

    PubMed

    Vasuki, Vidya; Cohen, Trevor

    2010-10-01

    The rapid growth of biomedical literature is evident in the increasing size of the MEDLINE research database. Medical Subject Headings (MeSH), a controlled set of keywords, are used to index all the citations contained in the database to facilitate search and retrieval. This volume of citations calls for efficient tools to assist indexers at the US National Library of Medicine (NLM). Currently, the Medical Text Indexer (MTI) system provides assistance by recommending MeSH terms based on the title and abstract of an article using a combination of distributional and vocabulary-based methods. In this paper, we evaluate a novel approach toward indexer assistance by using nearest neighbor classification in combination with Reflective Random Indexing (RRI), a scalable alternative to the established methods of distributional semantics. On a test set provided by the NLM, our approach significantly outperforms the MTI system, suggesting that the RRI approach would make a useful addition to the current methodologies.
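
    As a rough illustration of the nearest-neighbour classification idea (not the RRI method itself), MeSH terms can be recommended by letting the k already-indexed citations most similar to a new article vote. The toy vectors and MeSH terms below are invented for the example.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse bag-of-words vectors (dicts)."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend_mesh(query_vec, indexed, k=2):
    """Vote MeSH terms from the k indexed citations most similar to the query."""
    ranked = sorted(indexed, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    votes = Counter()
    for citation in ranked[:k]:
        votes.update(citation["mesh"])
    return [term for term, _ in votes.most_common()]

# Toy corpus: bag-of-words vectors with already-assigned MeSH terms.
corpus = [
    {"vec": {"heart": 2, "infarction": 1}, "mesh": ["Myocardial Infarction"]},
    {"vec": {"heart": 1, "failure": 2}, "mesh": ["Heart Failure"]},
    {"vec": {"kidney": 3}, "mesh": ["Kidney Diseases"]},
]
terms = recommend_mesh({"heart": 1, "infarction": 2}, corpus)
```

    The most similar citations contribute their terms first, mirroring how a semantic-space neighbourhood drives indexer recommendations.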

  19. MapReduce implementation of a hybrid spectral library-database search method for large-scale peptide identification.

    PubMed

    Kalyanaraman, Ananth; Cannon, William R; Latt, Benjamin; Baxter, Douglas J

    2011-11-01

    A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs. The source code, along with user documentation, is available at http://compbio.eecs.wsu.edu/MR-MSPolygraph. ananth@eecs.wsu.edu; william.cannon@pnnl.gov. Supplementary data are available at Bioinformatics online.
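
    The map/reduce decomposition can be sketched serially: the map step scores every (spectrum, library entry) pair independently, and the reduce step keeps the best match per spectrum. This toy stand-in uses an invented shared-peak-count scorer, not MSPolygraph's hybrid model.

```python
def identify(spectra, library, score):
    """Map: score every (spectrum, library entry) pair; reduce: keep the
    best-scoring library match per spectrum. A toy serial stand-in for the
    Hadoop pipeline; the real scorer is MSPolygraph's hybrid model."""
    mapped = [(s_id, lib_id, score(s, entry))
              for s_id, s in spectra.items()
              for lib_id, entry in library.items()]
    best = {}
    for s_id, lib_id, sc in mapped:
        if s_id not in best or sc > best[s_id][1]:
            best[s_id] = (lib_id, sc)
    return {s_id: lib_id for s_id, (lib_id, _) in best.items()}

# Toy "spectra" as peak sets; the score is simply the number of shared peaks.
spectra = {"s1": {100, 200, 300}, "s2": {150, 250}}
library = {"pepA": {100, 200, 310}, "pepB": {150, 250, 350}}
match = identify(spectra, library, lambda s, e: len(s & e))
```

    Because each scoring task is independent, the map step parallelizes trivially across a cluster, which is the source of the weeks-to-hours speedup.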

  20. Infrared small target detection based on multiscale center-surround contrast measure

    NASA Astrophysics Data System (ADS)

    Fu, Hao; Long, Yunli; Zhu, Ran; An, Wei

    2018-04-01

    Infrared (IR) small target detection plays a critical role in Infrared Search And Track (IRST) systems. Although it has been studied for years, difficulties remain in cluttered environments. Guided by the principle by which humans discriminate small targets from a natural scene, namely that there is a signature of discontinuity between an object and its neighboring regions, we develop an efficient method for infrared small target detection called the multiscale center-surround contrast measure (MCSCM). First, an entropy-based window selection technique is used to determine the maximum neighboring window size. Then, we construct a novel multiscale center-surround contrast measure to calculate a saliency map. Compared with the original image, the MCSCM map contains less background clutter and residual noise. Subsequently, a simple threshold is used to segment the target. Experimental results show that our method achieves better performance.
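
    The center-surround idea can be sketched directly: for each pixel, compare the centre value with the mean of its surrounding ring and keep the maximum contrast over scales. This is an illustrative sketch of the generic measure, not the paper's exact MCSCM formulation or its entropy-based window selection.

```python
import numpy as np

def center_surround_contrast(img, scales=(1, 2)):
    """Saliency map: centre value minus surround mean, maximised over scales.
    An illustrative sketch of a center-surround measure, not the paper's MCSCM."""
    h, w = img.shape
    sal = np.zeros_like(img, dtype=float)
    for s in scales:
        for y in range(s, h - s):
            for x in range(s, w - s):
                patch = img[y - s:y + s + 1, x - s:x + s + 1].astype(float)
                center = img[y, x]
                surround = (patch.sum() - center) / (patch.size - 1)
                sal[y, x] = max(sal[y, x], center - surround)
    return sal

# A bright single-pixel target on a dark background stands out in the map.
img = np.zeros((9, 9))
img[4, 4] = 10.0
sal = center_surround_contrast(img)
```

    A simple threshold on the resulting map then segments the target, as in the final step of the paper.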

  1. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
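
    The dominance idea is easiest to see in activity selection, one of the paper's examples: among compatible next choices, the activity finishing earliest dominates the rest, since it leaves the most room for subsequent activities. A minimal sketch of the resulting greedy algorithm:

```python
def select_activities(activities):
    """Greedy activity selection: earliest finish time first.
    Dominance argument: among compatible choices, the one finishing
    earliest dominates, because it leaves the most room for the rest."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:          # compatible with what we already chose
            chosen.append((start, finish))
            last_finish = finish
    return chosen

picked = select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)])
```

    The dominance relation here prunes the search space to a single branch, which is exactly what makes the algorithm greedy and linear after sorting.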

  2. Similarity recognition of online data curves based on dynamic spatial time warping for the estimation of lithium-ion battery capacity

    NASA Astrophysics Data System (ADS)

    Tao, Laifa; Lu, Chen; Noktehdan, Azadeh

    2015-10-01

    Battery capacity estimation is a significant recent challenge given the complex physical and chemical processes that occur within batteries and the restrictions on the accessibility of capacity degradation data. In this study, we describe an approach called dynamic spatial time warping, which is used to determine the similarities of two arbitrary curves. Unlike classical dynamic time warping methods, this approach can maintain the invariance of curve similarity to the rotations and translations of curves, which is vital in curve similarity search. Moreover, it utilizes the online charging or discharging data that are easily collected and do not require special assumptions. The accuracy of this approach is verified using NASA battery datasets. Results suggest that the proposed approach provides a highly accurate means of estimating battery capacity at less time cost than traditional dynamic time warping methods do for different individuals and under various operating conditions.
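
    For context, classical dynamic time warping, which dynamic spatial time warping extends, aligns two curves by a dynamic-programming recurrence over pointwise costs. A minimal sketch of the classical baseline (not the paper's rotation- and translation-invariant variant):

```python
def dtw_distance(a, b):
    """Classical dynamic time warping distance between two 1-D sequences.
    The paper's dynamic spatial time warping extends this idea; the classical
    version shown here is the usual baseline it is compared against."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Identical curves have distance 0; a time-shifted copy still aligns at no cost.
same = dtw_distance([1, 2, 3, 4], [1, 2, 3, 4])
shifted = dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 4])
```

    The warping path tolerates local time shifts, which is why similar charging curves measured at different rates can still be matched.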

  3. A bi-objective model for robust yard allocation scheduling for outbound containers

    NASA Astrophysics Data System (ADS)

    Liu, Changchun; Zhang, Canrong; Zheng, Li

    2017-01-01

    This article examines the yard allocation problem for outbound containers, with consideration of uncertainty factors, mainly the arrival and operation times of calling vessels. Based on a time-buffer insertion method, a bi-objective model is constructed to minimize the total operational cost and to maximize robustness against the uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n steps ahead, with the uncertainty factors set to their respective expected values; in the second stage, starting from the solutions obtained in the first stage and taking the uncertainty factors into account, a neighbourhood search heuristic is employed to generate robust solutions that better withstand fluctuations in the uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.

  4. Solving standard traveling salesman problem and multiple traveling salesman problem by using branch-and-bound

    NASA Astrophysics Data System (ADS)

    Saad, Shakila; Wan Jaafar, Wan Nurhadani; Jamil, Siti Jasmida

    2013-04-01

    The standard Traveling Salesman Problem (TSP) is the classical formulation, while the Multiple Traveling Salesman Problem (MTSP) is an extension of the TSP in which more than one salesman is involved. The objective is to find the least costly route a salesman can take if he wishes to visit each of a list of n cities exactly once and then return to the home city. There are a few methods that can be used to solve the MTSP. The objective of this research is to implement an exact method called the Branch-and-Bound (B&B) algorithm. Briefly, the idea of the B&B algorithm is to start with the associated Assignment Problem (AP). A branching strategy, Breadth-First Search (BFS), is applied to the TSP and MTSP. Problems with 11 city nodes are implemented for both formulations, and the solutions are presented.
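
    The branch-and-bound idea can be sketched for the single-salesman TSP: extend partial tours breadth-first and prune any branch whose lower bound already exceeds the best complete tour found. This sketch uses a simple cheapest-outgoing-edge bound rather than the paper's Assignment Problem relaxation; the distance matrix is a standard 4-city toy instance.

```python
from collections import deque

def tsp_branch_and_bound(dist):
    """Breadth-first branch-and-bound for the TSP (illustrative sketch; the
    paper bounds via the associated Assignment Problem instead).
    Bound: cost so far + the cheapest outgoing edge of every city still to leave."""
    n = len(dist)
    cheapest = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]
    best_cost, best_tour = float("inf"), None
    queue = deque([([0], 0)])                 # (partial tour from city 0, its cost)
    while queue:
        tour, cost = queue.popleft()
        if len(tour) == n:                    # complete tour: close the cycle
            total = cost + dist[tour[-1]][0]
            if total < best_cost:
                best_cost, best_tour = total, tour + [0]
            continue
        for city in range(n):
            if city in tour:
                continue
            new_cost = cost + dist[tour[-1]][city]
            remaining = [c for c in range(n) if c not in tour and c != city]
            bound = new_cost + cheapest[city] + sum(cheapest[c] for c in remaining)
            if bound < best_cost:             # prune branches that cannot win
                queue.append((tour + [city], new_cost))
    return best_cost, best_tour

dist = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
cost, tour = tsp_branch_and_bound(dist)
```

    The bound is valid because every city not yet departed must contribute at least its cheapest outgoing edge to any completion of the tour.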

  5. L1000CDS2: LINCS L1000 characteristic direction signatures search engine

    PubMed Central

    Duan, Qiaonan; Reid, St Patrick; Clark, Neil R; Wang, Zichen; Fernandez, Nicolas F; Rouillard, Andrew D; Readhead, Ben; Tritsch, Sarah R; Hodos, Rachel; Hafner, Marc; Niepel, Mario; Sorger, Peter K; Dudley, Joel T; Bavari, Sina; Panchal, Rekha G; Ma’ayan, Avi

    2016-01-01

    The library of integrated network-based cellular signatures (LINCS) L1000 data set currently comprises over a million gene expression profiles of chemically perturbed human cell lines. Through several unique intrinsic and extrinsic benchmarking schemes, we demonstrate that processing the L1000 data with the characteristic direction (CD) method significantly improves signal to noise compared with the MODZ method currently used to compute L1000 signatures. The CD-processed L1000 signatures are served through a state-of-the-art web-based search engine application called L1000CDS2. The L1000CDS2 search engine provides prioritization of thousands of small-molecule signatures, and their pairwise combinations, predicted to either mimic or reverse an input gene expression signature using two methods. The L1000CDS2 search engine also predicts drug targets for all the small molecules profiled by the L1000 assay that we processed. Targets are predicted by computing the cosine similarity between the L1000 small-molecule signatures and a large collection of signatures extracted from the gene expression omnibus (GEO) for single-gene perturbations in mammalian cells. We applied L1000CDS2 to prioritize small molecules that are predicted to reverse expression in 670 disease signatures also extracted from GEO, and prioritized small molecules that can mimic expression of 22 endogenous ligand signatures profiled by the L1000 assay. As a case study, to further demonstrate the utility of L1000CDS2, we collected expression signatures from human cells infected with Ebola virus at 30, 60 and 120 min. Querying these signatures with L1000CDS2, we identified kenpaullone, a GSK3B/CDK2 inhibitor that we show, in subsequent experiments, has a dose-dependent efficacy in inhibiting Ebola infection in vitro without causing cellular toxicity in human cell lines.
In summary, the L1000CDS2 tool can be applied in many biological and biomedical settings, while improving the extraction of knowledge from the LINCS L1000 resource. PMID:28413689
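
    The cosine-similarity target prediction described above can be sketched in a few lines: rank single-gene perturbation signatures by their cosine similarity to a drug signature. The gene names and expression values below are invented for illustration; the real signatures span hundreds of genes.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two dense expression-change vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_targets(drug_sig, gene_sigs, top=1):
    """Rank single-gene perturbation signatures by cosine similarity
    to a drug signature; the best matches are the predicted targets."""
    ranked = sorted(gene_sigs.items(),
                    key=lambda kv: cosine_sim(drug_sig, kv[1]), reverse=True)
    return [gene for gene, _ in ranked[:top]]

# Toy signatures over three genes (values invented for illustration).
gene_sigs = {"GSK3B": [1.0, -0.5, 0.2], "CDK2": [0.9, -0.4, 0.1], "TP53": [-1.0, 0.8, 0.3]}
targets = predict_targets([1.0, -0.6, 0.15], gene_sigs, top=2)
```

    A drug whose signature resembles a gene knockdown's signature is predicted to act on that gene, which is how the kenpaullone GSK3B/CDK2 association would surface.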

  6. 3D Protein structure prediction with genetic tabu search algorithm

    PubMed Central

    2010-01-01

    Background Protein structure prediction (PSP) has important applications in different fields, such as drug design, disease prediction, and so on. In protein structure prediction, there are two important issues. The first one is the design of the structure model and the second one is the design of the optimization technology. Because of the complexity of the realistic protein structure, the structure model adopted in this paper is a simplified model, which is called off-lattice AB model. After the structure model is assumed, optimization technology is needed for searching the best conformation of a protein sequence based on the assumed structure model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve the global optimization problem. In this paper, a hybrid algorithm, which combines genetic algorithm (GA) and tabu search (TS) algorithm, is developed to complete this task. Results In order to develop an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm. The combined use of these strategies can improve the efficiency of the algorithm. In these strategies, tabu search introduced into the crossover and mutation operators can improve the local search capability, the adoption of variable population size strategy can maintain the diversity of the population, and the ranking selection strategy can improve the possibility of an individual with low energy value entering into next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. Conclusions The hybrid algorithm has the advantages from both genetic algorithm and tabu search algorithm. 
It makes use of the advantage of multiple search points in genetic algorithm, and can overcome poor hill-climbing capability in the conventional genetic algorithm by using the flexible memory functions of TS. Compared with some previous algorithms, GATS algorithm has better performance in global optimization and can predict 3D protein structure more effectively. PMID:20522256
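
    The tabu search component can be sketched in isolation: keep a short memory of recently visited solutions and always move to the best non-tabu neighbour, which lets the search climb out of local minima. This is a generic sketch on a toy one-dimensional landscape, not the paper's GATS hybrid or the off-lattice AB model.

```python
def tabu_search(start, neighbors, cost, n_iter=50, tabu_size=5):
    """Minimal tabu search: always move to the best non-tabu neighbour,
    keeping a short memory of recent solutions to escape local minima.
    A generic sketch of the TS component, not the paper's GATS hybrid."""
    current = best = start
    tabu = [start]
    for _ in range(n_iter):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # may be uphill: that is the point
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                       # forget the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy 1-D landscape: a local minimum at index 2 and the global minimum at 7.
costs = [5, 4, 1, 4, 3, 2, 1, 0, 2, 5]
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(costs)]
best = tabu_search(0, nbrs, lambda i: costs[i], n_iter=30, tabu_size=4)
```

    In GATS this memory mechanism is embedded inside the crossover and mutation operators to strengthen local search.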

  7. Modal Decomposition of TTV: Inferring Planet Masses and Eccentricities

    NASA Astrophysics Data System (ADS)

    Linial, Itai; Gilbaum, Shmuel; Sari, Re’em

    2018-06-01

    Transit timing variations (TTVs) are a powerful tool for characterizing the properties of transiting exoplanets. However, inferring planet properties from the observed timing variations is a challenging task, which is usually addressed by extensive numerical searches. We propose a new, computationally inexpensive method for inverting TTV signals in a planetary system of two transiting planets. To the lowest order in planetary masses and eccentricities, TTVs can be expressed as a linear combination of three functions, which we call the TTV modes. These functions depend only on the planets’ linear ephemerides, and can be either constructed analytically, or by performing three orbital integrations of the three-body system. Given a TTV signal, the underlying physical parameters are found by decomposing the data as a sum of the TTV modes. We demonstrate the use of this method by inferring the mass and eccentricity of six Kepler planets that were previously characterized in other studies. Finally we discuss the implications and future prospects of our new method.
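
    Decomposing a signal as a linear combination of precomputed basis functions is an ordinary least-squares problem. The sketch below illustrates only that inversion step; the basis vectors and coefficients are invented, whereas the real TTV modes come from the planets' linear ephemerides or from three orbital integrations.

```python
import numpy as np

def decompose_ttv(signal, modes):
    """Least-squares coefficients expressing a TTV signal as a linear
    combination of precomputed basis modes (illustrative of the idea only)."""
    A = np.column_stack(modes)               # one column per TTV mode
    coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return coeffs

# Toy basis of three modes sampled at five transit epochs (values invented;
# real modes would come from ephemerides or orbital integrations).
t = np.arange(5, dtype=float)
modes = [np.sin(0.3 * t), np.cos(0.3 * t), t - t.mean()]
signal = 2.0 * modes[0] - 1.0 * modes[1] + 0.5 * modes[2]
c = decompose_ttv(signal, modes)
```

    Since the physical parameters enter linearly at lowest order, the recovered coefficients map directly to masses and eccentricity components, replacing an expensive numerical search.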

  8. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called non-dominated sorting genetic algorithm II (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated-circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.

  9. Multimedia content analysis, management and retrieval: trends and challenges

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan; Sebe, Nicu; Chang, Edward

    2006-01-01

    Recent advances in computing, communications and storage technology have made multimedia data become prevalent. Multimedia has gained enormous potential in improving the processes in a wide range of fields, such as advertising and marketing, education and training, entertainment, medicine, surveillance, wearable computing, biometrics, and remote sensing. Rich content of multimedia data, built through the synergies of the information contained in different modalities, calls for new and innovative methods for modeling, processing, mining, organizing, and indexing of this data for effective and efficient searching, retrieval, delivery, management and sharing of multimedia content, as required by the applications in the abovementioned fields. The objective of this paper is to present our views on the trends that should be followed when developing such methods, to elaborate on the related research challenges, and to introduce the new conference, Multimedia Content Analysis, Management and Retrieval, as a premium venue for presenting and discussing these methods with the scientific community. Starting from 2006, the conference will be held annually as a part of the IS&T/SPIE Electronic Imaging event.

  10. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
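
    The surrogate idea behind both algorithms can be sketched with a plain Gaussian RBF interpolant fitted to a handful of expensive evaluations; the actual COBRA/ConstrLMSRBF machinery adds constraint surrogates, restarts and candidate-point sampling on top. The sample points and shape parameter below are invented for illustration.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant s(x) = sum_i w_i exp(-||x - x_i||^2 / eps^2).
    A minimal surrogate sketch, not the paper's full algorithm."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(r / eps) ** 2)
    return np.linalg.solve(Phi, y)

def rbf_eval(X, w, x, eps=1.0):
    """Evaluate the fitted interpolant at a new point x."""
    r = np.linalg.norm(X - x, axis=-1)
    return float(np.exp(-(r / eps) ** 2) @ w)

# Surrogate of f(x) = x^2 from five samples; exact at the sample points.
X = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y = np.array([0.0, 0.25, 1.0, 2.25, 4.0])
w = rbf_fit(X, y)
at_node = rbf_eval(X, w, np.array([1.0]))
```

    The surrogate is cheap to evaluate, so the optimizer can propose many candidate points per expensive black-box call, which is what makes such methods practical in high dimensions.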

  11. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

    It is widely believed that the power law is a proper probability distribution for driving evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring and spin glasses. In this study, we find that exponential distributions, or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science, may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, based on experimental results for random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; exponential and hybrid distributions may be other choices.
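
    In EO-style methods, the distribution in question governs which fitness rank is selected for update. The sketch below builds a rank distribution from a power law with an optional exponential cutoff and draws from it by inverse-CDF sampling; the values of τ and κ are illustrative, not taken from the paper.

```python
import math
import random

def rank_probabilities(n, tau=1.4, kappa=None):
    """Selection probabilities over fitness ranks 1..n: a power law k**-tau,
    optionally damped by an exponential cutoff exp(-k/kappa) (the hybrid form)."""
    w = [k ** -tau * (math.exp(-k / kappa) if kappa else 1.0) for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

def sample_rank(probs, rng):
    """Inverse-CDF draw of a rank; lower (worse-fitness) ranks are favoured."""
    r, acc = rng.random(), 0.0
    for k, p in enumerate(probs, start=1):
        acc += p
        if r < acc:
            return k
    return len(probs)

rng = random.Random(1)
probs = rank_probabilities(8, tau=1.4, kappa=4.0)
counts = [0] * 8
for _ in range(10000):
    counts[sample_rank(probs, rng) - 1] += 1
```

    Swapping the weight function here is all it takes to compare power-law, exponential and hybrid selection rules empirically.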

  12. Concept similarity and related categories in information retrieval using formal concept analysis

    NASA Astrophysics Data System (ADS)

    Eklund, P.; Ducrou, J.; Dau, F.

    2012-11-01

    The application of formal concept analysis to the problem of information retrieval has been shown to be useful, but has lacked any real analysis of relevance ranking of search results. SearchSleuth is a program developed to experiment with the automated local analysis of Web search using formal concept analysis. SearchSleuth extends a standard search interface to include a conceptual neighbourhood centred on a formal concept derived from the initial query. This neighbourhood of the concept derived from the search terms is decorated with its upper and lower neighbours, representing more general and more specific concepts, respectively. SearchSleuth is in many ways an archetype of search engines based on formal concept analysis, with some novel features. In SearchSleuth, the notion of related categories - which are themselves formal concepts - is also introduced. This allows the retrieval focus to shift to a new formal concept called a sibling. This movement across the concept lattice needs to relate one formal concept to another in a principled way. This paper presents the issues concerning exploring, searching, and ordering the space of related categories. The focus is on understanding the use and meaning of proximity and semantic distance in the context of information retrieval using formal concept analysis.

  13. Detecting submerged objects: the application of side scan sonar to forensic contexts.

    PubMed

    Schultz, John J; Healy, Carrie A; Parker, Kenneth; Lowers, Bim

    2013-09-10

    Forensic personnel must deal with numerous challenges when searching for submerged objects. While traditional water search methods have generally involved dive teams, remotely operated vehicles (ROVs), and water scent dogs for cases involving submerged objects and bodies, law enforcement is increasingly integrating multiple methods that include geophysical technologies. There are numerous advantages to integrating geophysical technologies, such as side scan sonar and ground penetrating radar (GPR), with more traditional search methods. Overall, these methods decrease the time spent searching and increase the area searched. However, as with other search methods, each has advantages and disadvantages. For example, in instances with excessive aquatic vegetation or irregular bottom terrain, it may not be possible to discern a submerged body with side scan sonar. As a result, forensic personnel will have the highest rate of success during searches for submerged objects when integrating multiple search methods, including deploying multiple geophysical technologies. The goal of this paper is to discuss the methodology of various search methods that are employed for submerged objects and how these methods can be integrated as part of a comprehensive protocol for water searches, depending upon the type of underwater terrain. In addition, two successful case studies involving the search and recovery of a submerged human body using side scan sonar are presented to illustrate the successful application of integrating a geophysical technology with divers when searching for a submerged object. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. A patch-based pseudo-CT approach for MRI-only radiotherapy in the pelvis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreasen, Daniel, E-mail: dana@dtu.dk

    Purpose: In radiotherapy based only on magnetic resonance imaging (MRI), knowledge about tissue electron densities must be derived from the MRI. This can be achieved by converting the MRI scan to the so-called pseudo-computed tomography (pCT). An obstacle is that the voxel intensities in conventional MRI scans are not uniquely related to electron density. The authors previously demonstrated that a patch-based method could produce accurate pCTs of the brain using conventional T{sub 1}-weighted MRI scans. The method was driven mainly by local patch similarities and relied on simple affine registrations between an atlas database of the co-registered MRI/CT scan pairs and the MRI scan to be converted. In this study, the authors investigate the applicability of the patch-based approach in the pelvis. This region is challenging for a method based on local similarities due to the greater inter-patient variation. The authors benchmark the method against a baseline pCT strategy where all voxels inside the body contour are assigned a water-equivalent bulk density. Furthermore, the authors implement a parallelized approximate patch search strategy to speed up the pCT generation time to a more clinically relevant level. Methods: The data consisted of CT and T{sub 1}-weighted MRI scans of 10 prostate patients. pCTs were generated using an approximate patch search algorithm in a leave-one-out fashion and compared with the CT using frequently described metrics such as the voxel-wise mean absolute error (MAE{sub vox}) and the deviation in water-equivalent path lengths. Furthermore, the dosimetric accuracy was tested for a volumetric modulated arc therapy plan using dose–volume histogram (DVH) point deviations and γ-index analysis. Results: The patch-based approach had an average MAE{sub vox} of 54 HU; median deviations of less than 0.4% in relevant DVH points and a γ-index pass rate of 0.97 using a 1%/1 mm criterion.
The patch-based approach showed a significantly better performance than the baseline water pCT in almost all metrics. The approximate patch search strategy was 70x faster than a brute-force search, with an average prediction time of 20.8 min. Conclusions: The authors showed that a patch-based method based on affine registrations and T{sub 1}-weighted MRI could generate accurate pCTs of the pelvis. The main source of differences between pCT and CT was positional changes of air pockets and body outline.

  15. Malaria among gold miners in southern Pará, Brazil: estimates of determinants and individual costs.

    PubMed

    Vosti, S A

    1990-01-01

    As malaria grows more prevalent in the Amazon frontier despite increased expenditures by disease control authorities, national and regional tropical disease control strategies are being called into question. The current crisis involving traditional control/eradication methods has broadened the search for feasible and effective malaria control strategies--a search that necessarily includes an investigation of the roles of a series of individual and community-level socioeconomic characteristics in determining malaria prevalence rates, and the proper methods of estimating these links. In addition, social scientists and policy makers alike know very little about the economic costs associated with malarial infections. In this paper, I use survey data from several Brazilian gold mining areas to (a) test the general reliability of malaria-related questionnaire response data, and suggest categorization methods to minimize the statistical influence of exaggerated responses, (b) estimate three statistical models aimed at detecting the socioeconomic determinants of individual malaria prevalence rates, and (c) calculate estimates of the average cost of a single bout of malaria. The results support the general reliability of survey response data gathered in conjunction with malaria research. Once the effects of vector exposure were controlled for, individual socioeconomic characteristics were only weakly linked to malaria prevalence rates in these very special miners' communities. Moreover, the socioeconomic and exposure links that were significant did not depend on the measure of malaria adopted. Finally, individual costs associated with malarial infections were found to be a significant portion of miners' incomes.

  16. CALL FOR PAPERS: Special issue on the random search problem: trends and perspectives

    NASA Astrophysics Data System (ADS)

    da Luz, Marcos G. E.; Grosberg, Alexander Y.; Raposo, Ernesto P.; Viswanathan, Gandhi M.

    2008-11-01

    This is a call for contributions to a special issue of Journal of Physics A: Mathematical and Theoretical dedicated to the subject of the random search problem. The motivation behind this special issue is to summarize in a single comprehensive publication the main aspects (past and present), latest developments, different viewpoints and the directions being followed in this multidisciplinary field. We hope that such a special issue could become a particularly valuable reference for the broad scientific community working with the general random search problem. The Editorial Board has invited Marcos G E da Luz, Alexander Y Grosberg, Ernesto P Raposo and Gandhi M Viswanathan to serve as Guest Editors for the special issue. The general question of how to optimize the search for specific target objects in either continuous or discrete environments when the information available is limited is of significant importance in a broad range of fields. Representative examples include ecology (animal foraging, dispersion of populations), geology (oil recovery from mature reservoirs), information theory (automated searches of registers in high-capacity databases), molecular biology (proteins searching for their sites, e.g., on DNA), etc. One reason underlying the richness of the random search problem relates to the 'ignorance' of the locations of the randomly located 'targets'. A statistical approach to the search problem can deal adequately with incomplete information, and so stochastic strategies become advantageous. The general problem of how to search efficiently for randomly located target sites can thus be quantitatively described using the concepts and methods of statistical physics and stochastic processes. Scope Thus far, to the best of our knowledge, no recent textbook or review article in a physics journal has appeared on this topic.
This makes a special issue with review and research articles attractive to those interested in acquiring a general introduction to the field. The subject can be approached from the perspective of different fields: ecology, networks, transport problems, molecular biology, etc. The study of the problem is particularly suited to the concepts and methods of statistical physics and stochastic processes; for example, fractals, random walks, anomalous diffusion. Discrete landscapes can be approached via graph theory, random lattices and complex networks. Such topics are regularly discussed in Journal of Physics A: Mathematical and Theoretical. All such aspects of the problem fall within the scope and focus of this special issue on the random search problem: trends and perspectives. Editorial policy All contributions to the special issue will be refereed in accordance with the refereeing policy of the journal. In particular, all research papers will be expected to be original work reporting substantial new results. The issue will also contain a number of review articles by invitation only. The Guest Editors reserve the right to judge whether a contribution fits the scope of the special issue. Guidelines for preparation of contributions We aim to publish the special issue in August 2009. To realize this, the DEADLINE for contributed papers is 15 January 2009. There is a page limit of 15 printed pages (approximately 9000 words) per contribution. For papers exceeding this limit, the Guest Editors reserve the right to request a reduction in length. Further advice on document preparation can be found at www.iop.org/Journals/jphysa. Contributions to the special issue should if possible be submitted electronically by web upload at www.iop.org/Journals/jphysa, or by email to jphysa@iop.org, quoting 'J. Phys. A Special Issue— Random Search Problem'. Please state whether the paper has been invited or is contributed. Submissions should ideally be in standard LaTeX form. 
Please see the website for further information on electronic submissions. Authors unable to submit electronically may send hard-copy contributions to: Publishing Administrators, Journal of Physics A, Institute of Physics Publishing, Dirac House, Temple Back, Bristol BS1 6BE, UK, enclosing electronic code on CD if available and quoting 'J. Phys. A Special Issue—Random Search Problem'. All contributions should be accompanied by a read-me file or covering letter giving the postal and e-mail addresses for correspondence. The Publishing Office should be notified of any subsequent change of address. This special issue will be published in the paper and online version of the journal. The corresponding author of each contribution will receive a complimentary copy of the issue.

  17. Applying Hypertext Structures to Software Documentation.

    ERIC Educational Resources Information Center

    French, James C.; And Others

    1997-01-01

    Describes a prototype system for software documentation management called SLEUTH (Software Literacy Enhancing Usefulness to Humans) being developed at the University of Virginia. Highlights include information retrieval techniques, hypertext links that are installed automatically, a WAIS (Wide Area Information Server) search engine, user…

  18. BIBLIO: A Reprint File Management Algorithm

    ERIC Educational Resources Information Center

    Zelnio, Robert N.; And Others

    1977-01-01

    The development of a simple computer algorithm designed for use by the individual educator or researcher in maintaining and searching reprint files is reported. Called BIBLIO, the system is inexpensive and easy to operate and maintain without sacrificing flexibility and utility. (LBH)

  19. Birth Control Pill

    MedlinePlus

    The birth control pill (also called "the Pill") is a daily ...

  20. A System for Automatically Generating Scheduling Heuristics

    NASA Technical Reports Server (NTRS)

    Morris, Robert

    1996-01-01

    The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application selected for applying this method solves the problem of scheduling telescope observations, and is called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
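
    The abstract does not give the APA scheduler's actual heuristics, so the following is only a minimal sketch of greedy heuristic scheduling with hard constraints (feasibility checks) and a soft-constraint objective; the function names and the toy telescope domain are hypothetical.

```python
def greedy_schedule(requests, slots, feasible, score):
    """Greedily assign each request to the feasible slot that best
    improves the objective; requests that fit nowhere stay unscheduled."""
    schedule = {}  # slot -> request
    for req in requests:
        candidates = [s for s in slots
                      if s not in schedule and feasible(req, s)]
        if not candidates:
            continue  # hard constraints unsatisfiable for this request
        best = max(candidates, key=lambda s: score(req, s))
        schedule[best] = req
    return schedule

# Toy domain: slots are hours 0..5; each request names the hours it
# allows (hard constraint) and a preferred hour (soft preference).
requests = [
    {"name": "M31",  "allowed": {1, 2, 3}, "preferred": 2},
    {"name": "M42",  "allowed": {2, 3},    "preferred": 2},
    {"name": "M101", "allowed": {0, 1},    "preferred": 0},
]
feasible = lambda r, s: s in r["allowed"]
score = lambda r, s: -abs(s - r["preferred"])  # closer to preference = better

sched = greedy_schedule(requests, range(6), feasible, score)
```

    Greedy construction like this never backtracks, which is what makes it fast; quality depends entirely on how well the scoring heuristic ranks candidate slots.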

  1. Efficient reordering of PROLOG programs

    NASA Technical Reports Server (NTRS)

    Gooley, Markian M.; Wah, Benjamin W.

    1989-01-01

    PROLOG programs are often inefficient: execution corresponds to a depth-first traversal of an AND/OR graph; traversing subgraphs in another order can be less expensive. It is shown how the reordering of clauses within PROLOG predicates, and especially of goals within clauses, can prevent unnecessary search. The characterization and detection of restrictions on reordering is discussed. A system of calling modes for PROLOG, geared to reordering, is proposed, and ways to infer them automatically are discussed. The information needed for safe reordering is summarized, and which types can be inferred automatically and which must be provided by the user are considered. An improved method for determining a good order for the goals of PROLOG clauses is presented and used as the basis for a reordering system.
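
    The paper's reordering system is not specified in this abstract, but the underlying cost argument can be sketched: given (assumed) per-goal statistics, the expected cost of a goal order follows from the fact that each goal is re-tried once per solution of the goals to its left. All numbers below are invented for illustration.

```python
from itertools import permutations

# Hypothetical per-goal statistics: (solutions, cost) -- the expected
# number of solutions a goal yields and the cost of one call to it.
goals = {"a": (3, 1.0), "b": (1, 5.0), "c": (2, 2.0)}

def order_cost(order):
    """Total expected cost of running independent goals left to right:
    each goal is called once per solution of everything to its left."""
    total, multiplier = 0.0, 1.0
    for g in order:
        sols, cost = goals[g]
        total += multiplier * cost
        multiplier *= sols
    return total

# Exhaustively pick the cheapest ordering (fine for small clauses).
best = min(permutations(goals), key=order_cost)
```

    Here the expensive but deterministic goal "b" goes first because it multiplies later work the least; real reordering systems must also respect mode/safety restrictions, as the abstract notes.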

  2. Identification of membrane proteome of Paracoccidioides lutzii and its regulation by zinc

    PubMed Central

    de Curcio, Juliana Santana; Silva, Marielle Garcia; Silva Bailão, Mirelle Garcia; Báo, Sônia Nair; Casaletti, Luciana; Bailão, Alexandre Mello; de Almeida Soares, Célia Maria

    2017-01-01

    Aim: During infection development in the host, Paracoccidioides spp. face the deprivation of micronutrients, a mechanism called nutritional immunity. This condition induces the remodeling of proteins present in different metabolic pathways. Therefore, we attempted to identify membrane proteins and their regulation by zinc in Paracoccidioides lutzii. Materials & methods: A membrane-enriched fraction of P. lutzii yeast cells was isolated and purified, and its proteins were identified by 2D LC–MS/MS detection and database search. Results & conclusion: Zinc deprivation suppressed the expression of membrane proteins such as glycoproteins, those involved in cell wall synthesis and those related to oxidative phosphorylation. This is the first study describing membrane proteins and the effect of zinc deficiency on their regulation in a member of the genus Paracoccidioides. PMID:29134119

  3. Determination of the optimal conditions for inclination maneuvers using a Swing-by

    NASA Astrophysics Data System (ADS)

    Moura, O.; Celestino, C. C.; Prado, A. F. B. A.

    2018-05-01

    The search for methods to reduce fuel consumption in orbital transfers is relevant and always current in astrodynamics. Maneuvers assisted by gravity, also called Swing-by maneuvers, can therefore be an advantageous option for saving fuel. The present research explores the influence of some parameters in a Swing-by of an artificial satellite, orbiting a planet, with one of the moons of that planet, with the goal of changing the inclination of the artificial satellite around the main body of the system. The fuel consumption of this maneuver is compared with the consumption required to perform the same change of inclination using the classical approach of impulsive maneuvers.

  4. Early vertical correction of the deep curve of Spee

    PubMed Central

    Martins, Renato Parsekian

    2017-01-01

    ABSTRACT Even though few technological advancements have occurred in Orthodontics recently, the search for more efficient treatments continues. This paper analyses how to accelerate and improve one of the most arduous phases of orthodontic treatment, i.e., correction of the curve of Spee. The leveling of a deep curve of Spee can happen simultaneously with the alignment phase through a method called Early Vertical Correction (EVC). This technique uses two cantilevers affixed to the initial flexible archwire. This paper describes the force system produced by EVC and how to control its side effects. The EVC can reduce treatment time in malocclusions with deep curves of Spee, by combining two phases of the therapy, which clinicians ordinarily pursue sequentially. PMID:28658363

  5. A pluggable framework for parallel pairwise sequence search.

    PubMed

    Archuleta, Jeremy; Feng, Wu-chun; Tilevich, Eli

    2007-01-01

    The current and near future of the computing industry is one of multi-core and multi-processor technology. Most existing sequence-search tools have been designed with a focus on single-core, single-processor systems. This discrepancy between software design and hardware architecture substantially hinders sequence-search performance by not allowing full utilization of the hardware. This paper presents a novel framework that will aid the conversion of serial sequence-search tools into a parallel version that can take full advantage of the available hardware. The framework, which is based on a software architecture called mixin layers with refined roles, enables modules to be plugged into the framework with minimal effort. The inherent modular design improves maintenance and extensibility, thus opening up a plethora of opportunities for advanced algorithmic features to be developed and incorporated while routine maintenance of the codebase persists.
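
    The paper's mixin-layers architecture is not detailed in this abstract; the following is only a rough Python analogue of the general idea, layers that refine a core search() through cooperative super() calls so that concerns can be plugged in or removed independently. All class names and the toy "alignment" are invented.

```python
import time

class CoreSearch:
    def search(self, query, db):
        # Naive exact-substring "alignment": return indices of matches.
        return [i for i, seq in enumerate(db) if query in seq]

class TimingLayer:
    """Mixin refining search() with a cross-cutting role (timing)."""
    def search(self, query, db):
        t0 = time.perf_counter()
        hits = super().search(query, db)
        self.elapsed = time.perf_counter() - t0
        return hits

class FilterLayer:
    """Mixin that drops hits from sequences shorter than min_len."""
    min_len = 4
    def search(self, query, db):
        hits = super().search(query, db)
        return [i for i in hits if len(db[i]) >= self.min_len]

# Plug layers together; the MRO chains the refinements around the core.
class Tool(TimingLayer, FilterLayer, CoreSearch):
    pass

db = ["ACGT", "ACG", "TTACGTT"]
hits = Tool().search("ACG", db)  # index 1 is filtered out ("ACG" too short)
```

    Swapping a layer in or out is a one-line change to the composed class, which is the maintainability property the framework is after.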

  6. A linguistic geometry for space applications

    NASA Technical Reports Server (NTRS)

    Stilman, Boris

    1994-01-01

    We develop a formal theory, the so-called Linguistic Geometry, in order to discover the inner properties of human expert heuristics, which were successful in a certain class of complex control systems, and apply them to different systems. This research relies on the formalization of search heuristics of high-skilled human experts which allow for the decomposition of complex system into the hierarchy of subsystems, and thus solve intractable problems reducing the search. The hierarchy of subsystems is represented as a hierarchy of formal attribute languages. This paper includes a formal survey of the Linguistic Geometry, and new example of a solution of optimization problem for the space robotic vehicles. This example includes actual generation of the hierarchy of languages, some details of trajectory generation and demonstrates the drastic reduction of search in comparison with conventional search algorithms.

  7. UnCover on the Web: search hints and applications in library environments.

    PubMed

    Galpern, N F; Albert, K M

    1997-01-01

    Among the huge maze of resources available on the Internet, UnCoverWeb stands out as a valuable tool for medical libraries. This up-to-date, free-access, multidisciplinary database of periodical references is searched through an easy-to-learn graphical user interface that is a welcome improvement over the telnet version. This article reviews the basic and advanced search techniques for UnCoverWeb, as well as providing information on the document delivery functions and table of contents alerting service called Reveal. UnCover's currency is evaluated and compared with other current awareness resources. System deficiencies are discussed, with the conclusion that although UnCoverWeb lacks the sophisticated features of many commercial database search services, it is nonetheless a useful addition to the repertoire of information sources available in a library.

  8. Use of a Parabolic Microphone to Detect Hidden Subjects in Search and Rescue.

    PubMed

    Bowditch, Nathaniel L; Searing, Stanley K; Thomas, Jeffrey A; Thompson, Peggy K; Tubis, Jacqueline N; Bowditch, Sylvia P

    2018-03-01

    This study compares a parabolic microphone to unaided hearing in detecting and comprehending hidden callers at ranges of 322 to 2510 m. Eight subjects were placed 322 to 2510 m away from a central listening point. The subjects were concealed, and their calling volume was calibrated. In random order, subjects were asked to call the name of a state for 5 minutes. Listeners with parabolic microphones and others with unaided hearing recorded the direction of the call (detection) and name of the state (comprehension). The parabolic microphone was superior to unaided hearing in both detecting subjects and comprehending their calls, with an effect size (Cohen's d) of 1.58 for detection and 1.55 for comprehension. For each of the 8 hidden subjects, there were 24 detection attempts with the parabolic microphone and 54 to 60 attempts by unaided listeners. At the longer distances (1529-2510 m), the parabolic microphone was better at detection (83% vs 51%; P<0.00001 by χ²) and comprehension (57% vs 12%; P<0.00001). At the shorter distances (322-1190 m), the parabolic microphone offered advantages in detection (100% vs 83%; P=0.000023) and comprehension (86% vs 51%; P<0.00001), although not as pronounced as at the longer distances. Use of a 66-cm (26-inch) parabolic microphone significantly improved detection and comprehension of hidden calling subjects at distances between 322 and 2510 m when compared with unaided hearing. This study supports the use of a parabolic microphone in search and rescue to locate responsive subjects in favorable weather and terrain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Bat mortality and activity at a Northern Iowa wind resource area

    USGS Publications Warehouse

    Jain, A.A.; Koford, Rolf R.; Hancock, A.W.; Zenner, G.G.

    2011-01-01

    We examined bat collision mortality, activity and species composition at an 89-turbine wind resource area in farmland of north-central Iowa from mid-Apr. to mid-Dec. 2003 and mid-Mar. to mid-Dec. 2004. We found 30 bats beneath turbines on cleared ground and gravel access areas in 2003 and 45 bats in 2004. After adjusting for search probability, search efficiency and scavenging rate, we estimated total bat mortality at 396 ± 72 (95% CI) in 2003 and 636 ± 112 (95% CI) in 2004. Although carcasses were mostly migratory tree bats, we found a considerable proportion of little brown bats (Myotis lucifugus). We recorded 1465 bat echolocation call files at turbine sites (34.88 call files/detector-night) and 1536 bat call files at adjacent non-turbine sites (36.57 call files/detector-night). Bat activity did not differ significantly between turbine and non-turbine sites. A large proportion of recorded call files were made by Myotis sp., but this may be because we detected activity at ground level only. There was no relationship between types of turbine lights and either collision mortality or echolocation activity. The highest levels of bat echolocation activity and collision mortality were recorded during Jul. and Aug., during the autumn dispersal and migration period. The fatality rates for bats in general and little brown bats in particular were higher at the Top of Iowa Wind Resource Area than in other comparable studies in the region. Future efforts to study the behavior of bats in flight around turbines, as well as cumulative impact studies, should not ignore non-tree-dwelling bats, generally regarded as minimally affected. © 2011, American Midland Naturalist.
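
    The exact estimator the authors used is not given in the abstract; the sketch below only shows the general shape of such detection-bias corrections (raw count scaled by the probability a carcass was available and found). The parameter values are entirely hypothetical, chosen to land near the reported order of magnitude.

```python
def adjusted_mortality(carcasses, searcher_efficiency, carcass_persistence,
                       area_searched_fraction=1.0):
    """Naive detection-bias correction: divide the raw carcass count by
    the joint probability that a carcass persisted until a search, was
    found by the searcher, and fell in the searched area. This is one
    common estimator family, not the paper's exact method."""
    detection = searcher_efficiency * carcass_persistence * area_searched_fraction
    return carcasses / detection

# Hypothetical inputs: 30 carcasses found, with assumed biases, yield an
# estimate in the hundreds, as in the 2003 figure above.
estimate = adjusted_mortality(30, searcher_efficiency=0.5,
                              carcass_persistence=0.5,
                              area_searched_fraction=0.3)
```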

  10. Amoeba-Inspired Heuristic Search Dynamics for Exploring Chemical Reaction Paths.

    PubMed

    Aono, Masashi; Wakabayashi, Masamitsu

    2015-09-01

    We propose a nature-inspired model for simulating chemical reactions in a computationally resource-saving manner. The model was developed by extending our previously proposed heuristic search algorithm, called "AmoebaSAT [Aono et al. 2013]," which was inspired by the spatiotemporal dynamics of a single-celled amoeboid organism that exhibits sophisticated computing capabilities in adapting to its environment efficiently [Zhu et al. 2013]. AmoebaSAT is used for solving an NP-complete combinatorial optimization problem [Garey and Johnson 1979], "the satisfiability problem," and finds a constraint-satisfying solution at a speed that is dramatically faster than one of the conventionally known fastest stochastic local search methods [Iwama and Tamaki 2004] for a class of randomly generated problem instances [ http://www.cs.ubc.ca/~hoos/5/benchm.html ]. In cases where the problem has more than one solution, AmoebaSAT exhibits dynamic transition behavior among a variety of the solutions. Inheriting these features of AmoebaSAT, we formulate "AmoebaChem," which explores a variety of metastable molecules in which several constraints determined by input atoms are satisfied and generates dynamic transition processes among the metastable molecules. AmoebaChem and its developed forms will be applied to the study of the origins of life, to discover reaction paths for which expected or unexpected organic compounds may be formed via unknown unstable intermediates and to estimate the likelihood of each of the discovered paths.
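
    AmoebaSAT's update rule is not specified in this abstract; for orientation, the conventional stochastic local search it is benchmarked against looks roughly like the WalkSAT-style sketch below. The DIMACS-like clause encoding and all parameters are illustrative.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Minimal WalkSAT-style local search. Clauses are lists of nonzero
    ints: literal v means variable |v| must be True (v>0) or False (v<0)."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]          # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            # Greedy move: flip the variable that breaks fewest clauses.
            def broken(v):
                assign[v] = not assign[v]
                n = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return n
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
solution = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
```

    Where an instance has several solutions, repeated restarts of such a solver hop between them at random, in contrast to the structured transition behavior described for AmoebaSAT above.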

  11. Dfam: a database of repetitive DNA based on profile hidden Markov models.

    PubMed

    Wheeler, Travis J; Clements, Jody; Eddy, Sean R; Hubley, Robert; Jones, Thomas A; Jurka, Jerzy; Smit, Arian F A; Finn, Robert D

    2013-01-01

    We present a database of repetitive DNA elements, called Dfam (http://dfam.janelia.org). Many genomes contain a large fraction of repetitive DNA, much of which is made up of remnants of transposable elements (TEs). Accurate annotation of TEs enables research into their biology and can shed light on the evolutionary processes that shape genomes. Identification and masking of TEs can also greatly simplify many downstream genome annotation and sequence analysis tasks. The commonly used TE annotation tools RepeatMasker and Censor depend on sequence homology search tools such as cross_match and BLAST variants, as well as Repbase, a collection of known TE families each represented by a single consensus sequence. Dfam contains entries corresponding to all Repbase TE entries for which instances have been found in the human genome. Each Dfam entry is represented by a profile hidden Markov model, built from alignments generated using RepeatMasker and Repbase. When used in conjunction with the hidden Markov model search tool nhmmer, Dfam produces a 2.9% increase in coverage over consensus sequence search methods on a large human benchmark, while maintaining low false discovery rates, and coverage of the full human genome is 54.5%. The website provides a collection of tools and data views to support improved TE curation and annotation efforts. Dfam is also available for download in flat file format or in the form of MySQL table dumps.

  12. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
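
    FLANN's randomized k-d forests and priority search k-means trees are too involved for a short example, but the space-partitioning idea they build on can be illustrated with a plain (exact) k-d tree in pure Python. This is an illustration of the underlying data structure, not FLANN's approximate algorithms.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively split points on alternating axes (classic k-d tree)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Exact nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    d = math.dist(query, node["point"])
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if abs(diff) < best[0]:     # hypersphere crosses the splitting plane
        best = nearest(far, query, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
dist, pt = nearest(build_kdtree(pts), (9, 2))
```

    In high dimensions this pruning degrades toward brute force, which is exactly why FLANN resorts to approximate variants (randomized forests, bounded numbers of leaf checks) and to automatic algorithm selection per data set.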

  13. FRAMING Linguistics: ``SEANCES"(!!!) Martin-Bradshaw-Siegel ``Buzzwordism, Bandwagonism, Sloganeering For:Fun, Profit, Survival, Ego": Rampant UNethics Sociological-DYSfunctionality!!!

    NASA Astrophysics Data System (ADS)

    Bradshaw, John; Siegel, E.

    2010-03-01

    ``Sciences''/SEANCES(!!!) rampant UNethics!!! WITNESS: Yau v Perelman Poincare-conj.-pf. [Naser, NewYorker(8/06)]; digits log- law Siegel[AMS Nat.Mtg.(02)-Abs.973-60-124] inversion to ONLY BEQS: Newcomb(1881)<<

  14. Curiosity Search: Producing Generalists by Encouraging Individuals to Continually Explore and Acquire Skills throughout Their Lifetime.

    PubMed

    Stanton, Christopher; Clune, Jeff

    2016-01-01

    Natural animals are renowned for their ability to acquire a diverse and general skill set over the course of their lifetime. However, research in artificial intelligence has yet to produce agents that acquire all or even most of the available skills in non-trivial environments. One candidate algorithm for encouraging the production of such individuals is Novelty Search, which pressures organisms to exhibit different behaviors from other individuals. However, we hypothesized that Novelty Search would produce sub-populations of specialists, in which each individual possesses a subset of skills, but no one organism acquires all or most of the skills. In this paper, we propose a new algorithm called Curiosity Search, which is designed to produce individuals that acquire as many skills as possible during their lifetime. We show that in a multiple-skill maze environment, Curiosity Search does produce individuals that explore their entire domain, while a traditional implementation of Novelty Search produces specialists. However, we reveal that when modified to encourage intra-life behavioral diversity, Novelty Search can produce organisms that explore almost as much of their environment as Curiosity Search, although Curiosity Search retains a significant performance edge. Finally, we show that Curiosity Search is a useful helper objective when combined with Novelty Search, producing individuals that acquire significantly more skills than either algorithm alone.
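
    As a sketch of the novelty pressure discussed above: Novelty Search typically scores an individual by its mean behavioral distance to the k nearest previously seen behaviors, so common behaviors score low and unexplored ones score high. The details below are generic, not the paper's exact implementation.

```python
def novelty(behavior, archive, k=3):
    """Novelty of a behavior = mean Euclidean distance to its k nearest
    neighbours among previously seen behaviors (the archive)."""
    if not archive:
        return float("inf")   # first behavior is maximally novel
    dists = sorted(sum((a - b) ** 2 for a, b in zip(behavior, x)) ** 0.5
                   for x in archive)
    return sum(dists[:k]) / min(k, len(dists))

# Behaviors near the crowded region score low; far-away ones score high.
archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
common = novelty((0.05, 0.05), archive)
rare = novelty((5.0, 5.0), archive)
```

    The paper's contrast is about what the behavior vector measures: Novelty Search rewards differing from *other individuals*, while Curiosity Search rewards an individual for covering many distinct behaviors *within its own lifetime*.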

  15. Curiosity Search: Producing Generalists by Encouraging Individuals to Continually Explore and Acquire Skills throughout Their Lifetime

    PubMed Central

    Clune, Jeff

    2016-01-01

    Natural animals are renowned for their ability to acquire a diverse and general skill set over the course of their lifetime. However, research in artificial intelligence has yet to produce agents that acquire all or even most of the available skills in non-trivial environments. One candidate algorithm for encouraging the production of such individuals is Novelty Search, which pressures organisms to exhibit different behaviors from other individuals. However, we hypothesized that Novelty Search would produce sub-populations of specialists, in which each individual possesses a subset of skills, but no one organism acquires all or most of the skills. In this paper, we propose a new algorithm called Curiosity Search, which is designed to produce individuals that acquire as many skills as possible during their lifetime. We show that in a multiple-skill maze environment, Curiosity Search does produce individuals that explore their entire domain, while a traditional implementation of Novelty Search produces specialists. However, we reveal that when modified to encourage intra-life behavioral diversity, Novelty Search can produce organisms that explore almost as much of their environment as Curiosity Search, although Curiosity Search retains a significant performance edge. Finally, we show that Curiosity Search is a useful helper objective when combined with Novelty Search, producing individuals that acquire significantly more skills than either algorithm alone. PMID:27589267

  16. A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine

    PubMed Central

    Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini

    2013-01-01

    A new machine learning method, referred to as F-score_ELM, was proposed to classify lying and truth-telling using electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses of these subjects. Then, a recently developed classifier called extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of hidden nodes of the ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models, including the training and testing time, sensitivity and specificity on the training and testing sets, as well as network size. The experimental results showed that the number of hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
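
    The F-score feature selection step can be shown directly: it is the standard two-class F-score (between-class separation of a feature divided by its within-class scatter); the ELM classifier itself is omitted here, and the sample values are made up.

```python
def f_score(pos, neg):
    """Two-class F-score of one feature given its values in the positive
    and negative class: between-class separation over within-class scatter."""
    mean = lambda xs: sum(xs) / len(xs)
    m_all = mean(pos + neg)
    m_pos, m_neg = mean(pos), mean(neg)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    num = (m_pos - m_all) ** 2 + (m_neg - m_all) ** 2
    return num / (var(pos, m_pos) + var(neg, m_neg))

# A well-separated feature scores far higher than an uninformative one.
good = f_score([5.0, 5.1, 4.9], [1.0, 1.1, 0.9])
bad = f_score([1.0, 5.0, 3.0], [2.0, 4.0, 3.0])
```

    Ranking the 31 features by this score and sweeping a cutoff jointly with the hidden-node count is what the grid search described above would iterate over.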

  17. AIS-2 radiometry and a comparison of methods for the recovery of ground reflectance

    NASA Technical Reports Server (NTRS)

    Conel, James E.; Green, Robert O.; Vane, Gregg; Bruegge, Carol J.; Alley, Ronald E.; Curtiss, Brian J.

    1987-01-01

    A field experiment and its results involving Airborne Imaging Spectrometer-2 data are described. The radiometry and spectral calibration of the instrument are critically examined in light of laboratory and field measurements. Three methods of compensating for the atmosphere in the search for ground reflectance are compared. It was found that laboratory-determined responsivities are 30 to 50 percent less than expected for conditions of the flight for both short and long wavelength observations. The combined system atmosphere surface signal to noise ratio, as indexed by the mean response divided by the standard deviation for selected areas, lies between 40 and 110, depending upon how scene averages are taken, and is 30 percent less for flight conditions than for the laboratory. Atmospheric and surface variations may contribute to this difference. It is not possible to isolate instrument performance from the present data. As for methods of data reduction, the so-called scene average or log-residual method fails to recover any feature present in the surface reflectance, probably because of the extreme homogeneity of the scene.

  18. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
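
    The core of the approach, a Hooke-and-Jeeves-style exploratory move restricted to the integer lattice and to feasible points, can be sketched as follows. The toy objective and constraint stand in for a real integer program, and this sketch omits IESIP's rounding scheme and greedy refinements.

```python
def integer_exploratory_search(f, start, feasible, max_iter=1000):
    """Hooke-Jeeves-style exploratory moves on integers: probe +/-1 on
    each coordinate, accept any feasible improvement, and stop when no
    unit neighbour is better (a local optimum on the integer lattice)."""
    x = list(start)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for step in (1, -1):
                trial = x[:]
                trial[i] += step
                if feasible(trial) and f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            return x
    return x

# Toy pure-integer program: minimize (x - 3.6)^2 + (y - 1.2)^2
# subject to x + y <= 5 and x, y >= 0.
f = lambda v: (v[0] - 3.6) ** 2 + (v[1] - 1.2) ** 2
feasible = lambda v: v[0] >= 0 and v[1] >= 0 and v[0] + v[1] <= 5
best = integer_exploratory_search(f, [0, 0], feasible)
```

    Starting from a rounded continuous solution, as IESIP does, rather than from the origin would typically shorten this walk considerably.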

  19. The Value of Molecular vs. Morphometric and Acoustic Information for Species Identification Using Sympatric Molossid Bats

    PubMed Central

    Gager, Yann; Tarland, Emilia; Lieckfeldt, Dietmar; Ménage, Matthieu; Botero-Castro, Fidel; Rossiter, Stephen J.; Kraus, Robert H. S.; Ludwig, Arne; Dechmann, Dina K. N.

    2016-01-01

    A fundamental condition for any work with free-ranging animals is correct species identification. However, in case of bats, information on local species assemblies is frequently limited especially in regions with high biodiversity such as the Neotropics. The bat genus Molossus is a typical example of this, with morphologically similar species often occurring in sympatry. We used a multi-method approach based on molecular, morphometric and acoustic information collected from 962 individuals of Molossus bondae, M. coibensis, and M. molossus captured in Panama. We distinguished M. bondae based on size and pelage coloration. We identified two robust species clusters composed of M. molossus and M. coibensis based on 18 microsatellite markers but also on a more stringently determined set of four markers. Phylogenetic reconstructions using the mitochondrial gene co1 (DNA barcode) were used to diagnose these microsatellite clusters as M. molossus and M. coibensis. To differentiate species, morphological information was only reliable when forearm length and body mass were combined in a linear discriminant function (95.9% correctly identified individuals). When looking in more detail at M. molossus and M. coibensis, only four out of 13 wing parameters were informative for species differentiation, with M. coibensis showing lower values for hand wing area and hand wing length and higher values for wing loading. Acoustic recordings after release required categorization of calls into types, yielding only two informative subsets: approach calls and two-toned search calls. Our data emphasizes the importance of combining morphological traits and independent genetic data to inform the best choice and combination of discriminatory information used in the field. Because parameters can vary geographically, the multi-method approach may need to be adjusted to local species assemblies and populations to be entirely informative. PMID:26943355

  20. CHARACTERIZATION OF MANUFACTURING PROCESSES AND EMISSIONS AND POLLUTION PREVENTION OPTIONS FOR THE COMPOSITE WOOD INDUSTRY

    EPA Science Inventory

    The report summarizes information gathered on emissions from the composite wood industry (also called the plywood and particleboard industry) and potential pollution prevention options. Information was gathered during a literature search that included trade association publicatio...

  1. Computational Approaches to Identify Promoters and cis-Regulatory Elements in Plant Genomes1

    PubMed Central

    Rombauts, Stephane; Florquin, Kobe; Lescot, Magali; Marchal, Kathleen; Rouzé, Pierre; Van de Peer, Yves

    2003-01-01

    The identification of promoters and their regulatory elements is one of the major challenges in bioinformatics and integrates comparative, structural, and functional genomics. Many different approaches have been developed to detect conserved motifs in a set of genes that are either coregulated or orthologous. However, although recent approaches seem promising, in general, unambiguous identification of regulatory elements is not straightforward. The delineation of promoters is even harder, due to their complex nature, and in silico promoter prediction is still in its infancy. Here, we review the different approaches that have been developed for identifying promoters and their regulatory elements. We discuss the detection of cis-acting regulatory elements using word-counting or probabilistic methods (so-called “search by signal” methods) and the delineation of promoters by considering both sequence content and structural features (“search by content” methods). As an example of search by content, we explored in greater detail the association of promoters with CpG islands. However, due to differences in sequence content, the parameters used to detect CpG islands in humans and other vertebrates cannot be used for plants. Therefore, a preliminary attempt was made to define parameters that could possibly define CpG and CpNpG islands in Arabidopsis, by exploring the compositional landscape around the transcriptional start site. To this end, a data set of more than 5,000 gene sequences was built, including the promoter region, the 5′-untranslated region, and the first introns and coding exons. Preliminary analysis shows that promoter location based on the detection of potential CpG/CpNpG islands in the Arabidopsis genome is not straightforward. 
Nevertheless, because the landscape of CpG/CpNpG islands differs considerably between promoters and introns on the one side and exons (whether coding or not) on the other, more sophisticated approaches can probably be developed for the successful detection of “putative” CpG and CpNpG islands in plants. PMID:12857799
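The CpG-island criteria this record discusses can be sketched concretely. Below is a minimal implementation of the classic vertebrate thresholds (window of at least 200 bp, GC content of at least 50%, observed/expected CpG ratio of at least 0.6); as the abstract stresses, these human parameters do not transfer directly to plant genomes, so the thresholds here are illustrative only, and the test sequences are made up.

```python
# Classic vertebrate CpG-island criteria (illustrative thresholds only;
# the abstract notes these do NOT carry over to plant genomes).

def cpg_stats(seq):
    """Return (gc_fraction, observed/expected CpG) for a DNA string."""
    seq = seq.upper()
    n = len(seq)
    g = seq.count("G")
    c = seq.count("C")
    cpg = seq.count("CG")
    gc_frac = (g + c) / n if n else 0.0
    # Expected CpG count under base independence: (#C * #G) / length
    expected = (c * g) / n if n else 0.0
    obs_exp = cpg / expected if expected else 0.0
    return gc_frac, obs_exp

def is_cpg_island(seq, min_len=200, min_gc=0.5, min_obs_exp=0.6):
    if len(seq) < min_len:
        return False
    gc_frac, obs_exp = cpg_stats(seq)
    return gc_frac >= min_gc and obs_exp >= min_obs_exp

island_like = "CG" * 150          # 300 bp, GC-rich, CpG-dense
at_rich = "ATAT" * 100            # 400 bp, no CpG dinucleotides
print(is_cpg_island(island_like))  # True
print(is_cpg_island(at_rich))      # False
```

For plants, one would re-estimate `min_gc` and `min_obs_exp` from the compositional landscape around the transcriptional start site, and extend `cpg_stats` to count CpNpG trinucleotides as well.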

  2. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. 
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
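The cooperation mechanism described above can be sketched in a few lines: parallel workers each run their own search and periodically exchange the best-known solution through shared state. The inner solver below is a toy random local search on a sphere cost, not the actual enhanced Scatter Search (eSS), and all names and parameters are illustrative.

```python
# Minimal sketch of the CeSS cooperation idea: parallel "threads"
# periodically publish and adopt the best solution found so far.
# The inner solver is a toy random local search, NOT eSS.
import random
import threading

def cost(x):
    return sum(v * v for v in x)  # toy cost: global minimum at the origin

best_lock = threading.Lock()
shared_best = {"x": [random.uniform(-5, 5) for _ in range(3)]}
shared_best["f"] = cost(shared_best["x"])

def worker(iters=2000, share_every=100):
    x = [random.uniform(-5, 5) for _ in range(3)]
    fx = cost(x)
    for i in range(iters):
        cand = [v + random.gauss(0, 0.1) for v in x]  # local perturbation
        fc = cost(cand)
        if fc < fx:
            x, fx = cand, fc
        if i % share_every == 0:           # cooperation step
            with best_lock:
                if fx < shared_best["f"]:  # publish an improvement
                    shared_best["x"], shared_best["f"] = list(x), fx
                elif shared_best["f"] < fx:  # adopt a better incumbent
                    x, fx = list(shared_best["x"]), shared_best["f"]

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("best cost found:", shared_best["f"])
```

In the paper's setting each worker would be an eSS instance on a separate processor, and the shared information would include reference-set members rather than a single incumbent.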

  3. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. 
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112

  4. CD-ROM source data uploaded to the operating and storage devices of an IBM 3090 mainframe through a PC terminal.

    PubMed

    Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C

    1992-07-01

A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM's PC devices to floppy disks. These disks then are uploaded to the mainframe computer through an IBM/PC equipped with WordPerfect text editor and computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). The WordPerfect (5.1) processes and saves the text into DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC having network switch or dial-in connection after the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user an on-line, very powerful and quick method of searching for words specifying: diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce with optional parts of abstracts into papers being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting after a search is performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.

  5. Systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a service representative

    DOEpatents

    Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.

    2004-03-09

    The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor and a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize, application of the telephone call and associated data from the distributor to the selected customer service representative terminal.

  6. CCSVI-A. A call to clinicians and scientists to vocalise in an Internet age.

    PubMed

    Gafson, Arie R; Giovannoni, Gavin

    2014-03-01

    In 2008, Paulo Zamboni pioneered the 'liberation procedure' for treating multiple sclerosis (MS), claiming that MS is caused by an abnormality of venous drainage which he called chronic cerebrospinal venous insufficiency (CCSVI). CCSVI has been very controversial, both socio-politically and scientifically after going 'viral' via social media. In late 2012, only 56 original scientific research papers had been published on the 'CCSVI syndrome'; however, over 1,150,000 hits on Google existed when searching for the term 'chronic cerebrospinal venous insufficiency' or CCSVI. It is unclear whether the scientific community's response to CCSVI was influenced by Zamboni's original articles, a reactionary response to the 'social phenomenon' of CCSVI or indeed a complex interplay between both these factors. Furthermore, the epidemiology of this 'social phenomenon' remains un-investigated. A PubMed literature search revealed that the greatest level of public interest in CCSVI, as measured by Google Trends, occurred after only 30% of primary articles and 11% of negative studies were submitted for publication. The epicentre of social epidemic has been divided between Italy and Canada. Whilst Canadian scientists had yet to publish a primary article on CCSVI, it had a relative 76% search volume on Google Trends. It is likely that this public interest was sparked by media and political opportunism and fuelled by social media that was disconnected from the scientific community. Our findings call for a concerted effort for clinicians and scientists to engage with the public to ensure that uptake and spread of scientific discoveries via social media are viewed and interpreted in an appropriate context. Examples of how this may be achieved will also be discussed. © 2013 Published by Elsevier B.V.

  7. m2-ABKS: Attribute-Based Multi-Keyword Search over Encrypted Personal Health Records in Multi-Owner Setting.

    PubMed

    Miao, Yinbin; Ma, Jianfeng; Liu, Ximeng; Wei, Fushan; Liu, Zhiquan; Wang, Xu An

    2016-11-01

    Online personal health record (PHR) is more inclined to shift data storage and search operations to cloud server so as to enjoy the elastic resources and lessen computational burden in cloud storage. As multiple patients' data is always stored in the cloud server simultaneously, it is a challenge to guarantee the confidentiality of PHR data and allow data users to search encrypted data in an efficient and privacy-preserving way. To this end, we design a secure cryptographic primitive called attribute-based multi-keyword search over encrypted personal health records in multi-owner setting to support both fine-grained access control and multi-keyword search via Ciphertext-Policy Attribute-Based Encryption. Formal security analysis proves our scheme is selectively secure against chosen-keyword attack. As a further contribution, we conduct empirical experiments over real-world dataset to show its feasibility and practicality in a broad range of actual scenarios without incurring additional computational burden.

  8. Exploratory power of the harmony search algorithm: analysis and improvements for global numerical optimization.

    PubMed

    Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K

    2011-02-01

    The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
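The classical HS scheme the paper analyzes can be stated compactly: each new candidate value is drawn from harmony memory with probability HMCR, pitch-adjusted with probability PAR and bandwidth bw, and otherwise sampled uniformly at random. The sketch below uses illustrative parameter values and a sphere test function, not the paper's modified variant or benchmark suite.

```python
# Minimal classical harmony search (HS) for continuous minimization:
# memory consideration (HMCR), pitch adjustment (PAR, bandwidth bw),
# random re-initialization otherwise. Parameter values are illustrative.
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000, seed=0):
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:              # take from harmony memory
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    v += bw * rng.uniform(-1, 1)
            else:                                # fresh random value
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        fn = f(new)
        worst = max(range(hms), key=fit.__getitem__)
        if fn < fit[worst]:                      # replace worst harmony
            hm[worst], fit[worst] = new, fn
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = harmony_search(sphere, dim=5, lo=-10.0, hi=10.0)
print(f_best)
```

The paper's analysis concerns how the population variance of `hm` evolves under these three sampling rules, which is what motivates its proposed modification to the pitch-adjustment step.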

  9. TokSearch: A search engine for fusion experimental data

    DOE PAGES

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.; ...

    2018-04-01

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.

  10. TokSearch: A search engine for fusion experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.

  11. Evaluating call-count procedures for measuring local mourning dove populations

    USGS Publications Warehouse

    Armbruster, M.J.; Baskett, T.S.; Goforth, W.R.; Sadler, K.C.

    1978-01-01

    Seventy-nine mourning dove call-count runs were made on a 32-km route in Osage County, Missouri, May 1-August 31, 1971 and 1972. Circular study areas, each 61 ha, surrounding stop numbers 4 and 5, were delineated for intensive nest searches and population estimates. Tallies of cooing male doves along the entire call-count route were quite variable in repeated runs, fluctuating as much as 50 percent on consecutive days. There were no consistent relationships between numbers of cooing males tallied at stops 4 and 5 and the numbers of current nests or doves estimated to be present in the surrounding study areas. We doubt the suitability of call-count procedures to estimate precisely the densities of breeding pairs, nests or production of doves on small areas. Our findings do not dispute the usefulness of the national call-count survey as an index to relative densities of mourning doves during the breeding season over large portions of the United States, or as an index to annual population trends.

  12. The Cost of Surveillance After Urethroplasty

    PubMed Central

    Zaid, Uwais B.; Hawkins, Mitchel; Wilson, Leslie; Ting, Jie; Harris, Catherine; Alwaal, Amjad; Zhao, Lee C.; Morey, Allen F.; Breyer, Benjamin N.

    2015-01-01

    Objectives To determine variability in urethral stricture surveillance. Urethral strictures impact quality of life and exact a large economic burden. Although urethroplasty is the gold standard for durable treatment, strictures recur in 8–18%. There are no universally accepted guidelines for post-urethroplasty surveillance. We performed a literature search to evaluate variability in surveillance protocols, analyzed costs, and reviewed performance of each commonly employed modality. Methods Medline search was performed using the keywords: “urethroplasty,” “urethral stricture,” “stricture recurrence” to ascertain commonly used surveillance strategies for stricture recurrence. We included English language manuscripts from the past 10 years with at least 10 patients, and age greater than 18. Cost data was calculated based on standard 2013 Centers for Medicare and Medicaid Services physician’s fees. Results Surveillance methods included retrograde urethrogram/voiding cystourethrogram (RUG/VCUG), cystourethroscopy, urethral ultrasound, AUA-Symptom Score, and post void residual (PVR) and urine flowmetry (UF) measurement. Most protocols call for a RUG/VCUG at time of catheter removal. Following this, UF/PVR, cystoscopy, urine culture, or a combination of UF and AUA-SS were performed at variable intervals. The first year follow-up cost of anterior urethral surgery ranged from $205 to $1,784. For posterior urethral surgery, follow-up cost for the first year ranged from $404 to $961. Conclusions Practice variability for surveillance of urethral stricture recurrence after urethroplasty leads to significant differences in cost. PMID:25819624

  13. System description: IVY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCune, W.; Shumsky, O.

    2000-02-04

    IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.

  14. Tandem Mass Spectrum Sequencing: An Alternative to Database Search Engines in Shotgun Proteomics.

    PubMed

    Muth, Thilo; Rapp, Erdmann; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    Protein identification via database searches has become the gold standard in mass spectrometry based shotgun proteomics. However, as the quality of tandem mass spectra improves, direct mass spectrum sequencing gains interest as a database-independent alternative. In this chapter, the general principle of this so-called de novo sequencing is introduced along with pitfalls and challenges of the technique. The main tools available are presented with a focus on user friendly open source software which can be directly applied in everyday proteomic workflows.
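The core idea of de novo sequencing can be shown with a toy example: residues are read directly from mass differences between consecutive fragment-ion peaks, with no database involved. The peptide, peak list, and six-residue mass table below are made up for illustration; real de novo engines handle noise, missing peaks, and multiple ion series.

```python
# Toy de novo sequencing: infer residues from the mass differences
# between consecutive b-ion peaks. Hypothetical data; only a few
# monoisotopic residue masses are listed.
RESIDUE_MASS = {  # monoisotopic residue masses (Da)
    "G": 57.02146, "A": 71.03711, "S": 87.03203,
    "P": 97.05276, "V": 99.06841, "L": 113.08406,
}

def read_ladder(peaks, tol=0.01):
    """Infer a residue string from an ordered b-ion mass ladder."""
    seq = []
    for lo, hi in zip(peaks, peaks[1:]):
        delta = hi - lo
        match = [aa for aa, m in RESIDUE_MASS.items() if abs(m - delta) < tol]
        seq.append(match[0] if match else "?")
    return "".join(seq)

# b-ion ladder for a hypothetical fragment: successive peaks differ
# by the residue masses of A, S and P.
peaks = [58.029,
         58.029 + 71.03711,
         58.029 + 71.03711 + 87.03203,
         58.029 + 71.03711 + 87.03203 + 97.05276]
print(read_ladder(peaks))  # ASP
```

The hard part, which the chapter's tools address, is doing this robustly when peaks are missing or spurious and several residue combinations fit the same mass difference.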

  15. Search for Pentaquarks in the Hadronic Decays of the Z Boson with the DELPHI Detector at LEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavillet, Ph.

    2006-02-11

    Recent evidence for pentaquark states has been published, in particular for a strange pentaquark {theta}+(1540), for a double-strange state called {xi}(1862)-- and for a charmed state {theta}c(3100)0. Such states should be produced in e+e- annihilations in Z decays. In this paper a search for pentaquarks using the DELPHI detector is described. Preliminary upper limits at 95% C.L. are set on the production rates per Z decay of such particles and their charge-conjugate state.

  16. TESS SpaceX Rollout

    NASA Image and Video Library

    2018-04-15

    The SpaceX Falcon 9 rocket is rolled out to Space Launch Complex 40 at Cape Canaveral Air Force Station in Florida, with NASA's Transiting Exoplanet Survey Satellite (TESS) secured in its payload fairing. TESS will launch on the Falcon 9 no earlier than 6:51 p.m. EDT on April 18. TESS will search for planets outside of our solar system. The mission will find exoplanets that periodically block part of the light from their host stars, events called transits. The satellite will survey the nearest and brightest stars for two years to search for transiting exoplanets.

  17. Blending Old Technology With New: Measuring the Mass of the Electron

    NASA Astrophysics Data System (ADS)

    Brown, Jeremy

    2006-10-01

    One day back in 1974, I was searching through the equipment storage area, where I found these intriguing-looking solenoids, current balances, and electron tubes sitting in the storeroom all covered with dust, obviously unused for some time, calling out to me to be used in some experiment. Eventually I decided to drag out those dusty coils and tubes. I had my students do two experiments from the PSSC Laboratory Manual (1968 edition) called "The Measurement of a Magnetic Field in Fundamental Units," and its companion experiment, "The Mass of the Electron."

  18. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    Conjugate gradient (CG) method is one of iterative techniques prominently used in solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named NRM1 method. The method is analyzed under the exact and inexact line searches in given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational result indicates that NRM1 method is capable in solving the standard unconstrained optimization problems used. On the other hand, the NRM1 method performs better under inexact line search compared with exact line search.
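The NRM1 hybrid itself is not public in this abstract, so the sketch below uses a standard Fletcher-Reeves nonlinear CG with an inexact (Armijo backtracking) line search to illustrate the shared scheme d_k = -g_k + beta_k d_{k-1} that such methods build on; the test function and all parameter values are illustrative.

```python
# Fletcher-Reeves nonlinear CG with Armijo backtracking (inexact line
# search). A stand-in for hybrid CG methods like NRM1, not NRM1 itself.
def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def cg_fletcher_reeves(f, x, iters=200, tol=1e-10):
    g = grad(f, x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gg = sum(gi * gi for gi in g)
        if gg < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                 # safeguard: restart with steepest descent
            d = [-gi for gi in g]
            slope = -gg
        alpha, c = 1.0, 1e-4           # Armijo backtracking
        fx = f(x)
        while (f([xi + alpha * di for xi, di in zip(x, d)])
               > fx + c * alpha * slope and alpha > 1e-12):
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(f, x)
        beta = sum(gi * gi for gi in g_new) / gg   # Fletcher-Reeves beta
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x, f(x)

quad = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
x_best, f_best = cg_fletcher_reeves(quad, [0.0, 0.0])
print(x_best, f_best)
```

Swapping the backtracking loop for an exact minimization along `d` is what distinguishes the "exact line search" setting the paper compares against.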

  19. Surfing for suicide methods and help: content analysis of websites retrieved with search engines in Austria and the United States.

    PubMed

    Till, Benedikt; Niederkrotenthaler, Thomas

    2014-08-01

    The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P < .001, Austria: P < .05) and fewer protective characteristics (United States: P < .001, Austria: P < .001) compared to the term suicide. Help-related search terms (e.g., suicidal thoughts) yielded more websites with protective characteristics (United States: P = .07, Austria: P < .01). Websites retrieved with U.S. search engines generally had more protective characteristics (P < .001) than searches with Austrian search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.

  20. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data. PMID:23593445

  1. The admixture maximum likelihood test to test for association between rare variants and disease phenotypes.

    PubMed

    Tyrer, Jonathan P; Guo, Qi; Easton, Douglas F; Pharoah, Paul D P

    2013-06-06

    The development of genotyping arrays containing hundreds of thousands of rare variants across the genome and advances in high-throughput sequencing technologies have made feasible empirical genetic association studies to search for rare disease susceptibility alleles. As single variant testing is underpowered to detect associations, the development of statistical methods to combine analysis across variants - so-called "burden tests" - is an area of active research interest. We previously developed a method, the admixture maximum likelihood test, to test multiple, common variants for association with a trait of interest. We have extended this method, called the rare admixture maximum likelihood test (RAML), for the analysis of rare variants. In this paper we compare the performance of RAML with six other burden tests designed to test for association of rare variants. We used simulation testing over a range of scenarios to test the power of RAML compared to the other rare variant association testing methods. These scenarios modelled differences in effect variability, the average direction of effect and the proportion of associated variants. We evaluated the power for all the different scenarios. RAML tended to have the greatest power for most scenarios where the proportion of associated variants was small, whereas SKAT-O performed a little better for the scenarios with a higher proportion of associated variants. The RAML method makes no assumptions about the proportion of variants that are associated with the phenotype of interest or the magnitude and direction of their effect. The method is flexible and can be applied to both dichotomous and quantitative traits and allows for the inclusion of covariates in the underlying regression model. The RAML method performed well compared to the other methods over a wide range of scenarios. Generally power was moderate in most of the scenarios, underlying the need for large sample sizes in any form of association testing.
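RAML itself is not reproduced here; the sketch below shows the simplest kind of "burden test" such methods are compared against: collapse rare variants into a per-person carrier count and test the case/control difference in mean burden with a permutation p-value. The genotype data are made up for illustration.

```python
# Minimal collapsing burden test with a one-sided permutation p-value.
# A baseline illustration, NOT the RAML method described in the abstract.
import random

def burden_pvalue(burdens, is_case, n_perm=10000, seed=1):
    rng = random.Random(seed)
    cases = [b for b, c in zip(burdens, is_case) if c]
    ctrls = [b for b, c in zip(burdens, is_case) if not c]
    observed = sum(cases) / len(cases) - sum(ctrls) / len(ctrls)
    n_case = len(cases)
    hits = 0
    for _ in range(n_perm):
        perm = burdens[:]
        rng.shuffle(perm)              # break the genotype/phenotype link
        diff = (sum(perm[:n_case]) / n_case
                - sum(perm[n_case:]) / len(ctrls))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # one-sided permutation p-value

# Hypothetical data: cases carry more rare alleles than controls.
burdens = [2, 3, 1, 2, 3, 2, 0, 1, 0, 0, 1, 0]
is_case = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(burden_pvalue(burdens, is_case))
```

Simple collapsing like this loses power when effects vary in direction or only a few variants are causal, which is the gap that RAML's admixture maximum likelihood formulation and variance-component tests such as SKAT-O target.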

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, M.L.

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references.

  3. Comparison of methods for the detection of gravitational waves from unknown neutron stars

    NASA Astrophysics Data System (ADS)

    Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.

    2016-12-01

    Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time domain F -statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.

  4. An ontology-based search engine for protein-protein interactions

    PubMed Central

    2010-01-01

Background: Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. Results: We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Conclusion: Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology. PMID:20122195

  5. An ontology-based search engine for protein-protein interactions.

    PubMed

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
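The prime-factorization idea behind this method can be sketched in a few lines. The GO term IDs, prime assignment, and helper names below are made up for illustration and are not the paper's actual encoding (which additionally handles the GO hierarchy):

```python
# Illustrative sketch (not the paper's exact encoding): give each GO
# term a distinct prime and encode a protein's annotation set as the
# product of its terms' primes. A query's number then divides the
# protein's number iff every query term appears among the protein's
# encoded annotations.
PRIMES = [2, 3, 5, 7, 11]

def encode(terms, prime_of):
    n = 1
    for t in terms:
        n *= prime_of[t]
    return n

go_terms = ["GO:0003677", "GO:0005634", "GO:0006355"]  # hypothetical IDs
prime_of = dict(zip(go_terms, PRIMES))

protein = encode(["GO:0003677", "GO:0005634"], prime_of)  # 2 * 3 = 6
query_a = encode(["GO:0003677"], prime_of)                # 2
query_b = encode(["GO:0006355"], prime_of)                # 5

print(protein % query_a == 0)  # True: the protein satisfies the query
print(protein % query_b == 0)  # False: term absent from the protein
```

The divisibility test replaces set-containment checks with a single modulo operation, which is the efficiency argument the abstract makes.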

  6. Media Violence: The Search for Solutions.

    ERIC Educational Resources Information Center

    Thoman, Elizabeth

    1995-01-01

    Discusses the influence of mass media depictions of violence on children and provides suggestions for media literacy education. Calls for reducing children's exposure to media violence; changing the impact of violent images; stressing alternatives to violence for resolving conflicts; challenging the social supports for media violence; and…

  7. University of Maryland MRSEC - Education: Professional Development for

    Science.gov Websites

"stepped" (we call this type of surface a vicinal surface). Modern scanned-probe microscopes

  8. Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy

    2004-01-01

We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds but at increased computational cost.

  9. Social Search: A Taxonomy of, and a User-Centred Approach to, Social Web Search

    ERIC Educational Resources Information Center

    McDonnell, Michael; Shiri, Ali

    2011-01-01

    Purpose: The purpose of this paper is to introduce the notion of social search as a new concept, drawing upon the patterns of web search behaviour. It aims to: define social search; present a taxonomy of social search; and propose a user-centred social search method. Design/methodology/approach: A mixed method approach was adopted to investigate…

  10. OTHER: A multidisciplinary approach to the search for other inhabited worlds

    NASA Astrophysics Data System (ADS)

    Funes, J.; Lares, M.; De los Rios, M.; Martiarena, M.; Ahumada, A. V.

    2017-10-01

We present project OTHER (Otros mundos, tierra, humanidad, and espacio remoto), a multidisciplinary laboratory of ideas that addresses questions related to the scientific search for extraterrestrial intelligent life, such as: what is life? how did it originate? what criteria might we adopt to identify what we would call an extraterrestrial civilization? As a starting point, we consider the Drake equation, which offers a platform from which to address these questions in a multidisciplinary approach. As part of the project OTHER, we propose to develop and explain the last two parameters of the Drake equation, which we call the cultural factors: the fraction of intelligent civilizations that want or seek to communicate, and the average lifetime of such civilizations. The innovation of the project OTHER is the multidisciplinary approach in the context of the Argentine community. Our goal is to provide new ideas that could offer new perspectives on the old question: Are we alone?

  11. Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks at ATLAS

    NASA Astrophysics Data System (ADS)

    Cheng, Yangyang

This thesis presents a search for dark matter production in association with a Higgs boson decaying to a pair of bottom quarks, using data from 20.3 fb-1 of proton-proton collisions at a center-of-mass energy of 8 TeV collected by the ATLAS detector at the LHC. The dark matter particles are assumed to be Weakly Interacting Massive Particles, and can be produced in pairs at collider experiments. Events with large missing transverse energy are selected when produced in association with high momentum jets, of which at least two are identified as jets containing b-quarks consistent with those from a Higgs boson decay. To maintain good detector acceptance and selection efficiency of the signal across a wide kinematic range, two methods of Higgs boson reconstruction are used. The Higgs boson is reconstructed either as a pair of small-radius jets both containing b-quarks, called the "resolved" analysis, or as a single large-radius jet with substructure consistent with a high-momentum bb̄ system, called the "boosted" analysis. The resolved analysis is the focus of this thesis. The observed data are found to be consistent with the expected Standard Model backgrounds. The result from the resolved analysis is interpreted using a simplified model with a Z' gauge boson decaying into different Higgs bosons predicted in a two-Higgs-doublet model, of which the heavy pseudoscalar Higgs decays into a pair of dark matter particles. Exclusion limits are set in regions of parameter space for this model. Model-independent upper limits are also placed on the visible cross-sections for events with a Higgs boson decaying into bb̄ and large missing transverse momentum with thresholds ranging from 150 GeV to 400 GeV.

  12. High-speed data search

    NASA Technical Reports Server (NTRS)

    Driscoll, James N.

    1994-01-01

The high-speed data search system developed for KSC incorporates existing and emerging information retrieval technology to help a user intelligently and rapidly locate information found in large textual databases. This technology includes: natural language input; statistical ranking of retrieved information; an artificial intelligence concept called semantics, where 'surface level' knowledge found in text is used to improve the ranking of retrieved information; and relevance feedback, where user judgements about viewed information are used to automatically modify the search for further information. Semantics and relevance feedback are features of the system which are not available commercially. The system further demonstrates focus on paragraphs of information to decide relevance; and it can be used (without modification) to intelligently search all kinds of document collections, such as collections of legal documents, medical documents, news stories, patents, and so forth. The purpose of this paper is to demonstrate the usefulness of statistical ranking, our semantic improvement, and relevance feedback.
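The relevance-feedback step described above is commonly implemented in the Rocchio style: user judgements move the query vector toward viewed-relevant documents and away from non-relevant ones. The sketch below uses that classical formulation with illustrative weights; it is not the KSC system's actual algorithm.

```python
# Rocchio-style relevance feedback sketch (weights are conventional
# defaults, not taken from the paper): the updated query is
#   q' = alpha*q + beta*centroid(relevant) - gamma*centroid(nonrelevant)
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    dims = len(query)

    def centroid(docs):
        if not docs:
            return [0.0] * dims
        return [sum(d[i] for d in docs) / len(docs) for i in range(dims)]

    rel_c = centroid(relevant)
    non_c = centroid(nonrelevant)
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rel_c, non_c)]

# toy term-weight vectors over a 3-term vocabulary
q = [1.0, 0.0, 0.0]
rel = [[0.0, 1.0, 0.0]]   # document the user judged relevant
non = [[0.0, 0.0, 1.0]]   # document the user judged non-relevant
print(rocchio(q, rel, non))  # [1.0, 0.75, -0.15]
```

Re-running the search with the updated vector boosts terms from viewed-relevant documents, which is the behaviour the abstract describes.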

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojick, D E; Warnick, W L; Carroll, B C

With the United States federal government spending billions annually for research and development, ways to increase the productivity of that research can have a significant return on investment. The process by which science knowledge is spread is called diffusion. It is therefore important to better understand and measure the benefits of this diffusion of knowledge. In particular, it is important to understand whether advances in Internet searching can speed up the diffusion of scientific knowledge and accelerate scientific progress despite the fact that the vast majority of scientific information resources continue to be held in deep web databases that many search engines cannot fully access. To address the complexity of the search issue, the term global discovery is used for the act of searching across heterogeneous environments and distant communities. This article discusses these issues and describes research being conducted by the Office of Scientific and Technical Information (OSTI).

  14. Through the Google Goggles: Sociopolitical Bias in Search Engine Design

    NASA Astrophysics Data System (ADS)

    Diaz, A.

    Search engines like Google are essential to navigating the Web's endless supply of news, political information, and citizen discourse. The mechanisms and conditions under which search results are selected should therefore be of considerable interest to media scholars, political theorists, and citizens alike. In this chapter, I adopt a "deliberative" ideal for search engines and examine whether Google exhibits the "same old" media biases of mainstreaming, hypercommercialism, and industry consolidation. In the end, serious objections to Google are raised: Google may favor popularity over richness; it provides advertising that competes directly with "editorial" content; it so overwhelmingly dominates the industry that users seldom get a second opinion, and this is unlikely to change. Ultimately, however, the results of this analysis may speak less about Google than about contradictions in the deliberative ideal and the so-called "inherently democratic" nature of the Web.

  15. In Search of Search Engine Marketing Strategy Amongst SME's in Ireland

    NASA Astrophysics Data System (ADS)

    Barry, Chris; Charleton, Debbie

Researchers have identified the Web as a searcher's first port of call for locating information. Search Engine Marketing (SEM) strategies have been noted as a key consideration when developing, maintaining and managing Websites. A study presented here of the SEM practices of Irish small to medium enterprises (SMEs) reveals that they plan to spend more resources on SEM in the future. Most firms utilize an informal SEM strategy, where Website optimization is perceived as most effective in attracting traffic. Respondents cite the use of 'keywords in title and description tags' as the most used SEM technique, followed by the use of 'keywords throughout the whole Website', while 'Pay for Placement' was the most widely used Paid Search technique. In concurrence with the literature, measuring SEM performance remains a significant challenge, with many firms unsure if they measure it effectively. An encouraging finding is that Irish SMEs adopt a positive ethical posture when undertaking SEM.

  16. Literature search in medical publications.

    PubMed

    Solagberu, Babatunde A

    2002-01-01

The quality of a medical publication rests as much on the research paper as on the literature search prior to writing for publication. The art of literature search and its importance to the various steps in scientific writing have been emphasised in this paper. Many medical authors in the West African sub-region learned the art of publishing research work through their senior professional colleagues or by trial and error through the peer review of their work. This article is expected to fill this gap in training. It should guide trainee specialists and new entrants, who must conduct literature searches in order to publish research work, earn promotion, advance knowledge, and obtain grants and fellowship awards within the "publish or perish" culture of academic institutions. The current trend of electronic writing has called for a new style of referencing in medical publications, which is suggested in this paper.

  17. Privacy preserving index for encrypted electronic medical records.

    PubMed

    Chen, Yu-Chi; Horng, Gwoboa; Lin, Yi-Jheng; Chen, Kuo-Chang

    2013-12-01

With the development of electronic systems, privacy has become an important security issue in real life. In medical systems, the privacy of patients' electronic medical records (EMRs) must be fully protected. However, to combine efficiency and privacy, a privacy preserving index is introduced, whereby an EMR can be efficiently accessed by the patient or a specific doctor. In the literature, Goh first proposed a secure index scheme with keyword search over encrypted data based on a well-known primitive, the Bloom filter. In this paper, we propose a new privacy preserving index scheme, called the position index (P-index), with keyword search over encrypted data. The proposed index scheme is semantically secure against the adaptive chosen keyword attack, and it also provides flexible space, a lower false positive rate, and search privacy. Moreover, it does not rely on pairing, a complicated computation, and thus can efficiently search over encrypted electronic medical records from the cloud server.

  18. Single-agent parallel window search

    NASA Technical Reports Server (NTRS)

    Powley, Curt; Korf, Richard E.

    1991-01-01

Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
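A minimal serial IDA* sketch makes the cost-threshold iterations concrete; parallel window search would hand each threshold to a different process instead of deepening them in sequence. The graph, heuristic values, and function names below are toy assumptions, not from the paper.

```python
import math

# Toy IDA* on a small weighted graph: depth-first search with an
# f = g + h cutoff; each failed iteration returns the smallest f that
# exceeded the threshold, which becomes the next threshold.
GRAPH = {  # node -> [(neighbor, edge_cost)]
    "A": [("B", 1), ("C", 3)],
    "B": [("D", 1)],
    "C": [("D", 1)],
    "D": [],
}
H = {"A": 2, "B": 1, "C": 1, "D": 0}  # admissible heuristic to goal D

def ida_star(start, goal):
    def dfs(node, g, threshold, path):
        f = g + H[node]
        if f > threshold:
            return None, f            # report the smallest exceeding f
        if node == goal:
            return path, f
        next_t = math.inf
        for nbr, cost in GRAPH[node]:
            found, t = dfs(nbr, g + cost, threshold, path + [nbr])
            if found:
                return found, t
            next_t = min(next_t, t)
        return None, next_t

    threshold = H[start]
    while True:                        # threshold iterations; a parallel
        found, t = dfs(start, 0, threshold, [start])  # version runs these
        if found:                      # concurrently with distinct thresholds
            return found, threshold
        threshold = t

path, cost = ida_star("A", "D")
print(path, cost)  # ['A', 'B', 'D'] 2
```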

  19. Study on Hybrid Image Search Technology Based on Texts and Contents

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.

    2018-05-01

Text-based and content-based image search were first studied separately here. Text-based image feature extraction integrating statistical and topic features was put forward to overcome the limitation of extracting keywords from statistical word features alone. A search-by-image method based on multi-feature fusion was likewise put forward to address the imprecision of content-based image search using a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method was then proposed that relies primarily on text-based image search and secondarily on content-based search. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.

  20. A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.

    PubMed

    Huang, Shiping; Wu, Zhifeng; Misra, Anil

    2017-12-11

Location localization technology is used in a number of industrial and civil applications. Real-time location localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently while simultaneously eliminating bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema, singularities, and poor initial value choice. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
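A standard Newton-style formulation of the accurate localization step is a Gauss-Newton iteration on the range residuals. The anchor layout and ranges below are invented for illustration, and this is a generic sketch, not the paper's exact Modified Newton's Method.

```python
import math

# Gauss-Newton sketch for 2-D range-based localization: minimize
# sum_i (||p - a_i|| - d_i)^2 over the position p = (x, y).
ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical anchors
TRUE_POS = (3.0, 4.0)
RANGES = [math.dist(a, TRUE_POS) for a in ANCHORS]  # noise-free ranges

def localize(x, y, iters=20):
    for _ in range(iters):
        # residual r_i = ||p - a_i|| - d_i; Jacobian rows are unit vectors
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for (ax, ay), d in zip(ANCHORS, RANGES):
            dist = math.hypot(x - ax, y - ay)
            if dist == 0.0:
                continue  # skip a degenerate residual exactly at an anchor
            r = dist - d
            jx, jy = (x - ax) / dist, (y - ay) / dist
            jtj[0][0] += jx * jx; jtj[0][1] += jx * jy
            jtj[1][0] += jy * jx; jtj[1][1] += jy * jy
            jtr[0] += jx * r;     jtr[1] += jy * r
        # solve the 2x2 normal equations (J^T J) delta = J^T r directly
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        dx = (jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det
        dy = (jtj[0][0] * jtr[1] - jtj[1][0] * jtr[0]) / det
        x, y = x - dx, y - dy
    return x, y

x, y = localize(5.0, 5.0)
print(round(x, 3), round(y, 3))  # converges to the true position (3, 4)
```

With noise-free ranges and well-spread anchors the iteration converges quadratically; the paper's MNM additionally guards against local extrema and singular configurations.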

  1. Feature selection methods for big data bioinformatics: A survey from the search perspective.

    PubMed

    Wang, Lipo; Wang, Yaoli; Chang, Qing

    2016-12-01

    This paper surveys main principles of feature selection and their recent applications in big data bioinformatics. Instead of the commonly used categorization into filter, wrapper, and embedded approaches to feature selection, we formulate feature selection as a combinatorial optimization or search problem and categorize feature selection methods into exhaustive search, heuristic search, and hybrid methods, where heuristic search methods may further be categorized into those with or without data-distilled feature ranking measures. Copyright © 2016 Elsevier Inc. All rights reserved.
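Viewed as search, a heuristic method such as greedy forward selection explores the subset lattice one feature at a time, adding at each step the feature that most improves a scoring function. The scoring function below is a toy stand-in for a real model evaluation, and all names are illustrative.

```python
# Greedy forward selection: one example of the "heuristic search"
# category of feature selection methods the survey describes.
def forward_select(features, score, k):
    selected = []
    while len(selected) < k:
        # pick the unselected feature that maximizes the score when added
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # stop when no remaining feature improves the score
        selected.append(best)
    return selected

# toy additive utility per feature (hypothetical values, not real data)
UTILITY = {"f1": 5.0, "f2": 3.0, "f3": 1.0, "f4": 0.0}
score = lambda subset: sum(UTILITY[f] for f in subset)

print(forward_select(list(UTILITY), score, k=3))  # ['f1', 'f2', 'f3']
```

Exhaustive search would instead score all 2^n subsets; the greedy heuristic trades optimality guarantees for linear-in-n passes.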

  2. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participant to each of their files. A search query also has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags that are similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
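The probabilistic forwarding rule can be sketched as roulette-wheel selection over dot-product scores. The peers, tag vectors, and fall-back behaviour below are illustrative assumptions, not the paper's exact protocol.

```python
import random

# Sketch: forward a query to a neighbour with probability proportional
# to the summed dot-product similarity between the query's Kansei
# vector and the Kansei tag vectors of that neighbour's files.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def choose_neighbor(query, neighbor_tags, rng):
    scores = {p: sum(max(dot(query, t), 0.0) for t in tags)
              for p, tags in neighbor_tags.items()}
    total = sum(scores.values())
    if total == 0:
        return rng.choice(list(neighbor_tags))  # degenerate: random walk
    r = rng.uniform(0, total)                   # roulette-wheel selection
    acc = 0.0
    for p, s in scores.items():
        acc += s
        if r <= acc:
            return p
    return p

neighbors = {
    "peer1": [(0.9, 0.1), (0.8, 0.2)],  # files tagged mostly "calm"
    "peer2": [(0.1, 0.9)],              # files tagged mostly "exciting"
}
query = (1.0, 0.0)                      # searcher wants "calm" files
rng = random.Random(0)
picks = [choose_neighbor(query, neighbors, rng) for _ in range(1000)]
print(picks.count("peer1") > picks.count("peer2"))  # True: forwarding is biased
```

The bias toward high-similarity neighbours is what distinguishes this from the pure random-walk forwarding used as the comparison baseline.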

  3. Calling louder and longer: how bats use biosonar under severe acoustic interference from other bats

    PubMed Central

    Amichai, Eran; Blumrosen, Gaddi; Yovel, Yossi

    2015-01-01

    Active-sensing systems such as echolocation provide animals with distinct advantages in dark environments. For social animals, however, like many bat species, active sensing can present problems as well: when many individuals emit bio-sonar calls simultaneously, detecting and recognizing the faint echoes generated by one's own calls amid the general cacophony of the group becomes challenging. This problem is often termed ‘jamming’ and bats have been hypothesized to solve it by shifting the spectral content of their calls to decrease the overlap with the jamming signals. We tested bats’ response in situations of extreme interference, mimicking a high density of bats. We played-back bat echolocation calls from multiple speakers, to jam flying Pipistrellus kuhlii bats, simulating a naturally occurring situation of many bats flying in proximity. We examined behavioural and echolocation parameters during search phase and target approach. Under severe interference, bats emitted calls of higher intensity and longer duration, and called more often. Slight spectral shifts were observed but they did not decrease the spectral overlap with jamming signals. We also found that pre-existing inter-individual spectral differences could allow self-call recognition. Results suggest that the bats’ response aimed to increase the signal-to-noise ratio and not to avoid spectral overlap. PMID:26702045

  4. Calling louder and longer: how bats use biosonar under severe acoustic interference from other bats.

    PubMed

    Amichai, Eran; Blumrosen, Gaddi; Yovel, Yossi

    2015-12-22

    Active-sensing systems such as echolocation provide animals with distinct advantages in dark environments. For social animals, however, like many bat species, active sensing can present problems as well: when many individuals emit bio-sonar calls simultaneously, detecting and recognizing the faint echoes generated by one's own calls amid the general cacophony of the group becomes challenging. This problem is often termed 'jamming' and bats have been hypothesized to solve it by shifting the spectral content of their calls to decrease the overlap with the jamming signals. We tested bats' response in situations of extreme interference, mimicking a high density of bats. We played-back bat echolocation calls from multiple speakers, to jam flying Pipistrellus kuhlii bats, simulating a naturally occurring situation of many bats flying in proximity. We examined behavioural and echolocation parameters during search phase and target approach. Under severe interference, bats emitted calls of higher intensity and longer duration, and called more often. Slight spectral shifts were observed but they did not decrease the spectral overlap with jamming signals. We also found that pre-existing inter-individual spectral differences could allow self-call recognition. Results suggest that the bats' response aimed to increase the signal-to-noise ratio and not to avoid spectral overlap. © 2015 The Author(s).

  5. Extremal entanglement witnesses

    NASA Astrophysics Data System (ADS)

    Hansen, Leif Ove; Hauge, Andreas; Myrheim, Jan; Sollid, Per Øyvind

    2015-02-01

We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus Tr(Ωρ) ≥ 0 when Ω is a witness and ρ is a pure product state, ρ = ψψ† with ψ = ϕ⊗χ. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value f(ϕ,χ) = Tr(Ωρ) as a function of vectors ϕ and χ is a positive semidefinite biquadratic form. Every zero of f(ϕ,χ) imposes strong real-linear constraints on f and Ω. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of f(ϕ,χ) is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the witnesses on a real Hilbert space.

  6. Guiding Conformation Space Search with an All-Atom Energy Potential

    PubMed Central

    Brunette, TJ; Brock, Oliver

    2009-01-01

    The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015

  7. SymDex: increasing the efficiency of chemical fingerprint similarity searches for comparing large chemical libraries by using query set indexing.

    PubMed

    Tai, David; Fang, Jianwen

    2012-08-27

The large sizes of today's chemical databases require efficient algorithms to perform similarity searches. It can be very time consuming to compare two large chemical databases. This paper seeks to build upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms, creating heuristics for searching an individual chemical against a chemical library by detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals, so that redundant calculations that arise in the case of sequential searches are eliminated. We implement this novel algorithm in a similarity search program called Symmetric inDexing, or SymDex. SymDex shows a maximum speedup of over 232% compared to the state-of-the-art single-query search algorithm over real data for various fingerprint lengths. Considerable speedup is seen even for batch searches where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
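The core computation such fingerprint searches accelerate is the Tanimoto coefficient over bit vectors. In the sketch below, toy fingerprints are sets of "on" bit positions, and the brute-force double loop is the sequential baseline that indexing schemes like SymDex improve upon; all molecule names and values are made up.

```python
# Tanimoto similarity between fingerprints represented as sets of
# "on" bit positions: |A & B| / |A | B|.
def tanimoto(a, b):
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

library = {"mol1": {1, 2, 3, 4}, "mol2": {3, 4, 5}, "mol3": {7, 8}}
queries = {"q1": {1, 2, 3}, "q2": {3, 4}}

# brute-force comparison of two collections: every query against every
# library molecule; query-set indexing aims to share work across queries
threshold = 0.5
hits = [(q, m) for q, qf in queries.items()
        for m, mf in library.items()
        if tanimoto(qf, mf) >= threshold]
print(hits)  # [('q1', 'mol1'), ('q2', 'mol1'), ('q2', 'mol2')]
```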

  8. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

This paper presents an approach for co-state initialization, a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems that are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions, from Earth to Mars and from Earth to the asteroid Dionysus, is compared against three other approaches which, respectively, exploit random initialization of co-states, the adjoint-control transformation, and a standard genetic algorithm. The results indicate that with our proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.

  9. A Quantum-Based Similarity Method in Virtual Screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2015-10-02

    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopts concepts from quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers; hence, this study proposes three different techniques to embed and re-represent molecular compounds in complex-number format. The quantum-based similarity method developed in this study, which depends on a complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets, represented by 2D fingerprints, were used for the experiments. Simulated virtual screening experiments show that the effectiveness of the SQB method increased significantly, owing to the representational power of molecular compounds in complex-number form, compared to the Tanimoto benchmark similarity measure.
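    The abstract does not specify SQB's exact embedding, so the mapping below (pairing adjacent fingerprint bits into real and imaginary parts) is a hypothetical illustration of the general idea: re-represent a 2D fingerprint as a unit vector in a complex Hilbert space and score similarity by the inner product:

    ```python
    import math

    def to_complex_state(bits):
        """Pair consecutive bits into complex amplitudes and L2-normalize,
        giving a unit vector in a complex Hilbert space. (Illustrative
        embedding, not the one used by SQB.)"""
        amps = [complex(bits[i], bits[i + 1]) for i in range(0, len(bits) - 1, 2)]
        norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
        return [a / norm for a in amps] if norm else amps

    def quantum_similarity(x_bits, y_bits):
        """Squared magnitude of the inner product <x|y> -- the quantum
        'transition probability' between the two states (1.0 when the
        states coincide, 0.0 when they are orthogonal)."""
        x, y = to_complex_state(x_bits), to_complex_state(y_bits)
        inner = sum(a.conjugate() * b for a, b in zip(x, y))
        return abs(inner) ** 2
    ```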

  10. The use of geoscience methods for terrestrial forensic searches

    NASA Astrophysics Data System (ADS)

    Pringle, J. K.; Ruffell, A.; Jervis, J. R.; Donnelly, L.; McKinley, J.; Hansen, J.; Morgan, R.; Pirrie, D.; Harrison, M.

    2012-08-01

    Geoscience methods are increasingly being utilised in criminal, environmental and humanitarian forensic investigations, and the use of such methods is supported by a growing body of experimental and theoretical research. Geoscience search techniques can complement traditional methodologies in the search for buried objects, including clandestine graves, weapons, explosives, drugs, hazardous waste and vehicles. This paper details recent advances in search and detection methods, with case studies and reviews. Relevant examples are given, together with a generalised search workflow and a table of suggested detection techniques. Forensic geoscience techniques continue to evolve rapidly to help search investigators detect forensic targets that have hitherto been difficult to locate.

  11. Requirements for VICTORIA Class Fire Control System: Contact Management Function

    DTIC Science & Technology

    2014-07-01

    Canadian Navy ( RCN ) is currently upgrading the fire control system, which will include moving the software to new modular consoles which have screens...Development RCN Royal Canadian Navy SAC Sensor Analysis Coordinator; also called Command Display Console (CDC) operator SAR Search and Rescue SME

  12. Wildlife Photography - Birds

    NASA Image and Video Library

    2017-05-04

    Common gallinules search for food in a waterway at NASA's Kennedy Space Center in Florida. The center shares a border with the Merritt Island National Wildlife Refuge. More than 330 native and migratory bird species, 25 mammals, 117 fishes and 65 amphibians and reptiles call Kennedy and the wildlife refuge home.

  13. Discovery Systems.

    DTIC Science & Technology

    1986-04-01

    independent regularities - called "concept germs" by Minsky [Min86] and "cognitive cliches" by Chapman [Cha83] - to which are attached batteries of...Douglas B. Lenat. Theory Formation by Heuristic Search. Artificial Intelligence, 21, 1983. [Min86] Marvin Minsky. The Society of Mind. Simon and Schuster

  14. University of Maryland MRSEC - Education: Community

    Science.gov Websites

    ...(we call this type of surface a vicinal surface). Modern scanned-probe microscopes, such as the STM...

  15. University of Maryland MRSEC - Education: College

    Science.gov Websites

    ...(we call this type of surface a vicinal surface). Modern scanned-probe microscopes, such as the STM...

  16. Freshman Ethics Course Influences Students' Basic Beliefs.

    ERIC Educational Resources Information Center

    Appleton, James R.; Wong, Frank T.

    1989-01-01

    A freshman class called "Educational Odysseys" that bound together four themes and the concern for ethical living is described. The four themes included: the problem of identity; responses to good and evil; the search for success and surviving failure; and cultural diversity and a liberal education. (MLW)

  17. Animal Foraging and the Evolution of Goal-Directed Cognition

    ERIC Educational Resources Information Center

    Hills, Thomas T.

    2006-01-01

    Foraging-and feeding-related behaviors across eumetazoans share similar molecular mechanisms, suggesting the early evolution of an optimal foraging behavior called area-restricted search (ARS), involving mechanisms of dopamine and glutamate in the modulation of behavioral focus. Similar mechanisms in the vertebrate basal ganglia control motor…

  18. Searching the Heavens: Astronomy, Computation, Statistics, Data Mining and Philosophy

    NASA Astrophysics Data System (ADS)

    Glymour, Clark

    2012-03-01

    Our first and purest science, the mother of scientific methods, sustained by sheer curiosity, searching the heavens we cannot manipulate. From the beginning, astronomy has combined mathematical idealization, technological ingenuity, and indefatigable data collection with procedures to search through assembled data for the processes that govern the cosmos. Astronomers are, and ever have been, data miners, and for that reason astronomical methods (but not astronomical discoveries) have often been despised by statisticians and philosophers. Epithets laced the statistical literature: Ransacking! Data dredging! Double Counting! Statistical disdain was usually directed at social scientists and biologists, rarely if ever at astronomers, but the methodological attitudes and goals that many twentieth-century philosophers and statisticians rejected were creations of the astronomical tradition. The philosophical criticisms were earlier and more direct. In the shadow (or in Alexander Pope’s phrasing, the light) cast on nature in the eighteenth century by the Newtonian triumph, David Hume revived arguments from the ancient Greeks to challenge the very possibility of coming to know what causes what. His conclusion was endorsed in the twentieth century by many philosophers who found talk of causation unnecessary or unacceptably metaphysical, and absorbed by many statisticians as a general suspicion of causal claims, except possibly when they are founded on experimental manipulation. And yet in the hands of a mathematician, Thomas Bayes, and another mathematician and philosopher, Richard Price, Hume’s essays prompted the development of a new kind of statistics, the kind we now call "Bayesian." The computer and new data acquisition methods have begun to dissolve the antipathy between astronomy, philosophy, and statistics. But the resolution is practical, without much reflection on the arguments or the course of events. 
So, I offer a largely unoriginal history, substituting rather dry commentary on method for the fuller, livelier history of astronomers’ ambitions, politics, and passions. My accounts of various episodes in the astronomical tradition are taken from standard sources, especially Neugebauer (1952), Baum & Sheehan (1997), Crelensten (2006), and Stigler (1990). Methodological commentary is mine, not that of these sources.

  19. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update

    PubMed Central

    Kuleshov, Maxim V.; Jones, Matthew R.; Rouillard, Andrew D.; Fernandez, Nicolas F.; Duan, Qiaonan; Wang, Zichen; Koplev, Simon; Jenkins, Sherry L.; Jagodnik, Kathleen M.; Lachmann, Alexander; McDermott, Michael G.; Monteiro, Caroline D.; Gundersen, Gregory W.; Ma'ayan, Avi

    2016-01-01

    Enrichment analysis is a popular method for analyzing gene sets generated by genome-wide experiments. Here we present a significant update to one of the tools in this domain, called Enrichr. Enrichr currently contains a large collection of diverse gene set libraries available for analysis and download: in total, 180,184 annotated gene sets from 102 gene set libraries. New features have been added to Enrichr, including the ability to submit fuzzy sets and upload BED files, an improved application programming interface, and visualization of the results as clustergrams. Overall, Enrichr is a comprehensive resource for curated gene sets and a search engine that accumulates biological knowledge for further biological discoveries. Enrichr is freely available at: http://amp.pharm.mssm.edu/Enrichr. PMID:27141961
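    The core statistic behind gene set enrichment tools of this kind is the hypergeometric tail probability (Fisher's exact test): how likely an input gene list overlaps an annotated set at least as much as observed by chance. A minimal stdlib-only sketch of that test (Enrichr itself also applies rank-based corrections not shown here):

    ```python
    from math import comb

    def enrichment_pvalue(overlap, input_size, set_size, universe):
        """P(X >= overlap) when drawing input_size genes at random from a
        universe that contains set_size genes of the annotated set."""
        total = comb(universe, input_size)
        p = 0.0
        for k in range(overlap, min(input_size, set_size) + 1):
            p += comb(set_size, k) * comb(universe - set_size, input_size - k) / total
        return p
    ```

    For example, with a universe of 10 genes of which 5 are in the set, an input list of 2 genes that both land in the set has p = C(5,2)/C(10,2) = 2/9.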

  20. Integrating unified medical language system and association mining techniques into relevance feedback for biomedical literature search.

    PubMed

    Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Massanari, R Michael

    2016-07-19

    Finding highly relevant articles in biomedical databases is challenging, not only because it is often difficult to accurately express a user's underlying intention through keywords, but also because a keyword-based query normally returns a long list of hits, many of which are unwanted by the user. This paper proposes a novel biomedical literature search system, called BiomedSearch, which supports complex queries and relevance feedback. The system employs association mining techniques to build a k-profile representing a user's relevance feedback. More specifically, we developed a weighted interest measure and an association mining algorithm to find the strength of association between a query and each concept in the article(s) selected by the user as feedback. The top concepts are used to form a k-profile for the next-round search. BiomedSearch relies on Unified Medical Language System (UMLS) knowledge sources to map text files to standard biomedical concepts, and was designed to support queries of any level of complexity. A prototype of the BiomedSearch software was built and preliminarily evaluated using the Genomics data from the TREC (Text Retrieval Conference) 2006 Genomics Track. Initial experimental results indicated that BiomedSearch increased the mean average precision (MAP) for a set of queries. With UMLS and association mining techniques, BiomedSearch can effectively utilize users' relevance feedback to improve the performance of biomedical literature search.
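    The abstract does not define the weighted interest measure, so the scoring rule below is only an illustrative stand-in for the relevance-feedback loop: score each concept appearing in the user's feedback articles by a lift-like association strength against the whole corpus, and keep the top k as the profile for the next search round.

    ```python
    from collections import Counter

    def build_k_profile(feedback_docs, all_docs, k=5):
        """feedback_docs / all_docs: lists of concept sets, one per article.
        Score = support among feedback articles divided by support in the
        whole corpus -- a lift-style strength of association with the
        user's interest. (Hypothetical measure, not BiomedSearch's own.)"""
        fb = Counter(c for doc in feedback_docs for c in doc)
        corpus = Counter(c for doc in all_docs for c in doc)
        scores = {c: (fb[c] / len(feedback_docs)) / (corpus[c] / len(all_docs))
                  for c in fb}
        return sorted(scores, key=scores.get, reverse=True)[:k]
    ```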

Top