Sample records for framework search method

  1. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

    Background: Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes, no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge of chemistry or of the underlying database cartridge. This decreases application development time. Results: Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application, I describe how the framework could be used. I then benchmarked this example application to establish basic performance expectations for chemical structure searches and for import and export of SD-files. Conclusions: Using a simple web application, it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, partly because chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762
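    The core idea above, structure storage and search reduced to plain method calls, can be pictured with a minimal sketch. The framework itself is Java and backed by the Bingo cartridge; the Python below is purely illustrative, and every name in it (MoleculeRepository, substructure_search) is hypothetical rather than the framework's actual API.

```python
# Illustrative sketch only: the real framework is Java and its API differs.
# All names (MoleculeRepository, add, substructure_search) are hypothetical.
class MoleculeRepository:
    def __init__(self):
        # id -> SMILES string; a real backend would be PostgreSQL + Bingo
        self._store: dict[str, str] = {}

    def add(self, mol_id: str, smiles: str) -> None:
        """Store a structure; the framework hides the SQL/cartridge calls."""
        self._store[mol_id] = smiles

    def substructure_search(self, query_smiles: str) -> list[str]:
        """Return ids of molecules 'containing' the query. A real cartridge
        performs true chemical substructure matching; naive string
        containment merely stands in for that single method call."""
        return [i for i, s in self._store.items() if query_smiles in s]

repo = MoleculeRepository()
repo.add("m1", "c1ccccc1O")   # phenol
repo.add("m2", "CCO")         # ethanol
print(repo.substructure_search("c1ccccc1"))  # -> ['m1']
```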

  2. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes, no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge of chemistry or of the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application, I describe how the framework could be used. I then benchmarked this example application to establish basic performance expectations for chemical structure searches and for import and export of SD-files. Using a simple web application, it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, partly because chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.

  3. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
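    To make the record's core technique concrete, here is a minimal, generic sketch of iterative repair driven by simulated annealing: worsening repairs are accepted with a temperature-dependent probability, which is what lets the search escape local minima. The constraint-based schedule representation of the actual paper is not reproduced; the toy problem and all names below are invented for illustration.

```python
import math
import random

def anneal_repair(schedule, violations, repair_move, t0=10.0, cooling=0.995, steps=5000):
    """Generic iterative repair by simulated annealing.
    `violations(s)` counts constraint violations; `repair_move(s)` returns a
    locally repaired candidate. Worsening moves are accepted with probability
    exp(-delta / T), which allows escape from local minima."""
    current, cost = schedule, violations(schedule)
    temperature = t0
    for _ in range(steps):
        candidate = repair_move(current)
        delta = violations(candidate) - cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, cost = candidate, cost + delta
        temperature *= cooling
        if cost == 0:
            break
    return current, cost

# Toy use: place 5 tasks in 3 slots so no slot holds more than 2 tasks.
def count_violations(s):
    return sum(max(0, s.count(slot) - 2) for slot in set(s))

def move(s):
    s = list(s)
    s[random.randrange(len(s))] = random.randrange(3)
    return s

print(anneal_repair([0, 0, 0, 0, 0], count_violations, move))
```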

  4. Fit of screw-retained fixed implant frameworks fabricated by different methods: a systematic review.

    PubMed

    Abduo, Jaafar; Lyons, Karl; Bennani, Vincent; Waddell, Neil; Swain, Michael

    2011-01-01

    The aim of this study was to review the published literature investigating the accuracy of fit of fixed implant frameworks fabricated using different materials and methods. A comprehensive electronic search was performed through PubMed (MEDLINE) using Boolean operators to combine key words. The search was limited to articles written in English and published through May 2010. In addition, a manual search through articles and reference lists retrieved from the electronic search and peer-reviewed journals was also conducted. A total of 248 articles were retrieved, and 26 met the specified inclusion criteria for the review. The selected articles assessed the fit of fixed implant frameworks fabricated by different techniques. The investigated fabrication approaches were one-piece casting, sectioning and reconnection, spark erosion with an electric discharge machine, computer-aided design/computer-assisted manufacturing (CAD/CAM), and framework bonding to prefabricated abutment cylinders. Cast noble metal frameworks have a predictable fit, and additional fit refinement treatment is not indicated in well-controlled conditions. Base metal castings do not provide a satisfactory level of fit unless additional refinement treatment is performed, such as sectioning and laser welding or spark erosion. Spark erosion, framework bonding to prefabricated abutment cylinders, and CAD/CAM have the potential to provide implant frameworks with an excellent fit; CAD/CAM is the most consistent and least technique-sensitive of these methods.

  5. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background: Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Given known binding sites, new binding sites of a transcription factor can be identified in unannotated sequences. A number of search methods have been introduced over the years. However, one can rarely find a single method that performs best on all transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results: We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and that two variants, the kNPV and kODV methods, benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions: We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace in which to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
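    The paper's exact NPV and ODV constructions are not reproduced in the abstract. One plausible minimal reading of a "negative-to-positive" query vector, used here only to illustrate searching in a vector space, is the direction from the centroid of background (negative) vectors toward the centroid of known binding site (positive) vectors, with candidates ranked by cosine similarity. All data below is synthetic.

```python
import numpy as np

def npv_query(positive, negative):
    """One plausible reading of a negative-to-positive query vector: point
    from the centroid of non-sites toward the centroid of known binding
    sites. The paper's exact construction may differ."""
    return positive.mean(axis=0) - negative.mean(axis=0)

def cosine_scores(query, candidates):
    """Rank candidate site vectors by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return c @ q

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(20, 8))   # vectorized known binding sites
neg = rng.normal(0.0, 1.0, size=(20, 8))   # vectorized background sequence
q = npv_query(pos, neg)
print(cosine_scores(q, rng.normal(size=(5, 8))))
```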

  6. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, beam directions most of the time continue to be selected manually by the treatment planner, without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam's-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search, since it allows searches away from the neighborhood of the current iterate. Beam's-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework to furnish a priori knowledge of the problem, so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
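    As a concrete picture of the poll step described above, here is a minimal derivative-free pattern search sketch: poll the mesh neighborhood along a positive spanning set of directions and contract the mesh when no neighbor improves. The search step and the paper's beam's-eye-view ordering of directions are omitted, and the objective is a toy stand-in.

```python
import numpy as np

def pattern_search(f, x0, mesh=1.0, shrink=0.5, tol=1e-6, max_iter=200):
    """Derivative-free pattern search: the poll step evaluates f on a mesh
    of coordinate directions around the incumbent; if no neighbor improves,
    the mesh is contracted. A search step (global moves away from the
    neighborhood) is omitted here for brevity."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])  # positive spanning set
    for _ in range(max_iter):
        improved = False
        for d in directions:                  # poll step
            candidate = x + mesh * d
            fc = f(candidate)
            if fc < fx:
                x, fx, improved = candidate, fc, True
                break
        if not improved:
            mesh *= shrink                    # contract mesh on poll failure
            if mesh < tol:
                break
    return x, fx

print(pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))
```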

  7. Methods used to address fidelity of receipt in health intervention research: a citation analysis and systematic review.

    PubMed

    Rixon, Lorna; Baron, Justine; McGale, Nadine; Lorencatto, Fabiana; Francis, Jill; Davies, Anna

    2016-11-18

    The American Behaviour Change Consortium (BCC) framework acknowledges patients as active participants and supports the need to investigate the fidelity with which they receive interventions, i.e. receipt. According to this framework, addressing receipt consists of using strategies to assess or enhance participants' understanding and/or performance of intervention skills. This systematic review aims to establish the frequency with which receipt, as defined in the BCC framework, is addressed in health research, and to describe the methods used in papers informed by the BCC framework and in the wider literature. A forward citation search on papers presenting the BCC framework was performed to determine the frequency with which receipt as defined in this framework was addressed. A second electronic database search, including search terms pertaining to fidelity, receipt, health and process evaluations, was performed to identify papers reporting on receipt in the wider literature, irrespective of the framework used. These results were combined with the forward citation search results to review methods used to assess receipt. Eligibility criteria and data extraction forms were developed and applied to papers. Results are described in a narrative synthesis. Of the 33 studies identified from the forward citation search that reported on fidelity, 19.6% were found to address receipt. In 60.6% of these, receipt was assessed in relation to understanding, and in 42.4% in relation to performance of skill. Strategies to enhance these were present in 12.1% and 21.1% of studies, respectively. Fifty-five studies were included in the review of the wider literature. Several frameworks and operationalisations of receipt were reported, but the latter were not always consistent with the guiding framework. Receipt was most frequently operationalised in relation to intervention content (16.4%), satisfaction (14.5%), engagement (14.5%), and attendance (14.5%). The majority of studies (90.0%) included subjective assessments of receipt. These relied on quantitative (76.0%) rather than qualitative (42.0%) methods, and studies collected data on intervention recipients (50.0%), intervention deliverers (28.0%), or both (22.0%). Few studies (26.0%) reported on the reliability or validity of the methods used. Receipt is infrequently addressed in health research, and improvements to methods of assessment and reporting are required.

  8. Global OpenSearch

    NASA Astrophysics Data System (ADS)

    Newman, D. J.; Mitchell, A. E.

    2015-12-01

    At AGU 2014, NASA EOSDIS demonstrated a case study of an OpenSearch framework for Earth science data discovery. That framework leverages the IDN and CWIC OpenSearch API implementations to provide seamless discovery of data through the 'two-step' discovery process outlined by the Federation of Earth Science Information Partners (ESIP) OpenSearch Best Practices. But how would an Earth scientist leverage this framework, and what are the benefits? Using a client that understands the OpenSearch specification and, for further clarity, the various best practices and extensions, a scientist can discover a plethora of data not normally accessible either by traditional methods (NASA Earth Data Search, Reverb, etc.) or direct methods (going to the source of the data). We will demonstrate, via the CWICSmart web client, how an Earth scientist can access regional data on regional phenomena in a uniform and aggregated manner. We will demonstrate how an Earth scientist can 'globalize' their discovery. You want to find local data on 'sea surface temperature of the Indian Ocean'? We can help you with that. 'European meteorological data'? Yes. 'Brazilian rainforest satellite imagery'? That too. CWIC allows you to get Earth science data in a uniform fashion from a large number of disparate, worldwide agencies. This is what we mean by Global OpenSearch.
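    In OpenSearch-style discovery, a client fills a URL template from the service's description document with search parameters. The sketch below only assembles such a request URL; the endpoint and parameter names are hypothetical stand-ins, since the real template comes from the IDN/CWIC OpenSearch description documents.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names for illustration; the real
# CWIC/IDN OpenSearch description documents define the actual template.
ENDPOINT = "https://example.org/opensearch/granules.atom"

params = {
    "keyword": "sea surface temperature",   # free-text search terms
    "boundingBox": "40,-30,110,20",         # rough Indian Ocean box (W,S,E,N)
    "startTime": "2015-01-01",
    "endTime": "2015-12-31",
    "count": 20,
}
print(f"{ENDPOINT}?{urlencode(params)}")    # URL a client would then fetch
```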

  9. Hybrid Filtering in Semantic Query Processing

    ERIC Educational Resources Information Center

    Jeong, Hanjo

    2011-01-01

    This dissertation presents a hybrid filtering method and a case-based reasoning framework for enhancing the effectiveness of Web search. Web search may not reflect user needs, intent, context, and preferences, because today's keyword-based search lacks the semantic information needed to capture the user's context and intent in posing the search query.…

  10. ENVIRONMENTAL IMPACT ASSESSMENT OF A HEALTH TECHNOLOGY: A SCOPING REVIEW.

    PubMed

    Polisena, Julie; De Angelis, Gino; Kaunelis, David; Gutierrez-Ibarluzea, Iñaki

    2018-06-13

    The Health Technology Expert Review Panel is an advisory body to the Canadian Agency for Drugs and Technologies in Health (CADTH) that develops recommendations on health technology assessments (HTAs) for nondrug health technologies using a deliberative framework. The framework spans several domains, including the environmental impact of the health technology(ies). Our research objective was to identify articles on frameworks, methods, or case studies on the environmental impact assessment of health technologies. A literature search in major databases and a focused gray literature search were conducted. The main search concepts were HTA and environmental impact/sustainability. Eligible articles were those that described a conceptual framework or methods used to conduct an environmental assessment of health technologies, or case studies on the application of an environmental assessment. From the 1,710 citations identified, thirteen publications were included. Two articles presented a framework to incorporate environmental assessment in HTAs. Other approaches described weight-of-evidence practices and comprehensive and integrated environmental impact assessments. Central themes derived include transparency and repeatability, integration of components in a framework or of evidence into a single outcome, data availability to ensure the accuracy of findings, and familiarity with the approach used. Each framework and method presented has a different focus related to the ecosystem, health economics, or engineering practices. Their descriptions suggested transparency, repeatability, and the integration of components or of evidence into a single outcome as their main strengths. Our review is an initial step in a larger initiative by CADTH to develop the methods and processes to address the environmental impact question in an HTA.

  11. BioEve Search: A Novel Framework to Facilitate Interactive Literature Search

    PubMed Central

    Ahmed, Syed Toufeeq; Davulcu, Hasan; Tikves, Sukru; Nair, Radhika; Zhao, Zhongming

    2012-01-01

    Background. Recent advances in computational and biological methods over the last two decades have remarkably changed the scale of biomedical research, and with them began an unprecedented growth in both the production of biomedical data and the amount of published literature discussing it. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort but also pave the way to discovering hitherto unknown information implicitly conveyed in the texts. Results. We developed a novel framework (named "BioEve") that seamlessly integrates Faceted Search (Information Retrieval) with an Information Extraction module to provide an interactive search experience for researchers in the life sciences. It enables guided step-by-step search query refinement, by suggesting concepts and entities (like genes, drugs, and diseases) to quickly filter and modify the search direction, thereby facilitating an enriched paradigm in which the user can discover related concepts and keywords while information seeking. Conclusions. The BioEve Search framework makes it easier to enable scalable interactive search over large collections of textual articles and to discover knowledge hidden in thousands of biomedical literature articles. PMID:22693501
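    Guided facet-based refinement of the kind described here can be illustrated with a toy sketch: count entity facets over the current hit list, surface the most common ones as suggestions, and narrow the hits when the user picks one. The documents and entity tags below are invented, not BioEve's actual data model.

```python
from collections import Counter

# Invented hit list with pre-extracted entity facets per document.
docs = [
    {"id": 1, "genes": {"TP53"}, "diseases": {"glioma"}},
    {"id": 2, "genes": {"TP53", "EGFR"}, "diseases": {"lung cancer"}},
    {"id": 3, "genes": {"EGFR"}, "diseases": {"lung cancer"}},
]

def facet_counts(hits, field):
    """Count facet values over the current hits; these become suggestions."""
    return Counter(v for d in hits for v in d[field])

def refine(hits, field, value):
    """Narrow the hit list after the user picks a suggested facet value."""
    return [d for d in hits if value in d[field]]

print(facet_counts(docs, "genes"))                       # TP53: 2, EGFR: 2
print([d["id"] for d in refine(docs, "genes", "EGFR")])  # -> [2, 3]
```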

  12. Searching for an Axis-Parallel Shoreline

    NASA Astrophysics Data System (ADS)

    Langetepe, Elmar

    We search for an unknown horizontal or vertical line in the plane under the competitive framework. We design a framework for lower bounds on all cyclic and monotone strategies that results in two-sequence functionals. For optimizing such functionals we apply a method that combines two main paradigms. The given solution shows that the combination method is of general interest. Finally, we obtain the current best strategy and can prove that it is the best strategy among all cyclic and monotone strategies, which is a main step toward a lower-bound construction.
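    For context, strategies in this setting are judged by their competitive ratio; the standard definition (notation ours, not taken from the paper) is the worst-case ratio of search cost to the distance of the hidden shoreline:

```latex
% A strategy S is c-competitive if, for every shoreline at distance d >= 1
% from the start, cost_S(d) <= c * d; its competitive ratio is
c(S) \;=\; \sup_{d \ge 1} \frac{\mathrm{cost}_S(d)}{d},
\qquad \text{and an optimal strategy minimizes } c(S).
```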

  13. Introducing PALETTE: an iterative method for conducting a literature search for a review in palliative care.

    PubMed

    Zwakman, Marieke; Verberne, Lisa M; Kars, Marijke C; Hooft, Lotty; van Delden, Johannes J M; Spijker, René

    2018-06-02

    In the rapidly developing specialty of palliative care, literature reviews have become increasingly important for informing and improving the field. When applying to palliative care the widely used methods for literature reviews developed for intervention studies, challenges are encountered, such as the heterogeneity of palliative care in practice (a wide range of domains in patient characteristics, stages of illness and stakeholders), the explorative character of review questions, and poorly defined keywords and concepts. To overcome these challenges and to provide guidance for researchers conducting a literature search for a review in palliative care, the Palliative cAre Literature rEview iTeraTive mEthod (PALETTE), a pragmatic framework, was developed. We describe PALETTE in detail. PALETTE consists of four phases: developing the review question, building the search strategy, validating the search strategy and performing the search. The framework incorporates different information retrieval techniques, contacting experts, pearl growing, citation tracking and Boolean searching, in a transparent way, to maximize the retrieval of literature relevant to the topic of interest. The different components and techniques are repeated until no new articles qualify for inclusion. The phases within PALETTE are interconnected by a recurrent process of validation on 'golden bullets' (articles that undoubtedly should be part of the review), citation tracking and concept terminology reflecting the review question. To give insight into the value of PALETTE, we compared it with the recommended search method for reviews of intervention studies. By using PALETTE on two palliative care literature reviews, we were able to improve our review questions and search strategies. Moreover, in comparison with the recommended search for intervention reviews, the number of articles that needed to be screened decreased, while more relevant articles were retrieved. Overall, PALETTE helped us gain a thorough understanding of the topic of interest and made us confident that the included studies comprehensively represented the topic. PALETTE is a coherent and transparent pragmatic framework for overcoming the challenges of performing a literature review in palliative care. The method enables researchers to improve question development and to maximize both sensitivity and precision in their search process.

  14. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care

    PubMed Central

    Valentijn, Pim P.; Schepman, Sanneke M.; Opheij, Wilfrid; Bruijnzeels, Marc A.

    2013-01-01

    Introduction: Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. Methods: The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. Results: The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. Discussion: The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective. PMID:23687482

  15. A Solution Framework for Environmental Characterization Problems

    EPA Science Inventory

    This paper describes experiences developing a grid-enabled framework for solving environmental inverse problems. The solution approach taken here couples environmental simulation models with global search methods and requires readily available computational resources of the grid ...

  16. Environmental Visualization and Horizontal Fusion

    DTIC Science & Technology

    2005-10-01

    ...the section on EVIS Rules. Federated Search – Discovering Content: Another method of discovering services and their content has been implemented ... in HF through a next-generation knowledge discovery framework called Federated Search. A virtual information space, called Collateral Space, was ... environmental mission effects products, is presented later in the paper. Federated Search allows users to search through Collateral Space data that is ...

  17. A methodological survey identified eight proposed frameworks for the adaptation of health related guidelines.

    PubMed

    Darzi, Andrea; Abou-Jaoude, Elias A; Agarwal, Arnav; Lakis, Chantal; Wiercioch, Wojtek; Santesso, Nancy; Brax, Hneine; El-Jardali, Fadi; Schünemann, Holger J; Akl, Elie A

    2017-06-01

    Our objective was to identify and describe published frameworks for the adaptation of clinical, public health, and health services guidelines. We included reports describing methods of guideline adaptation in sufficient detail to allow their reproduction. We searched the Medline and EMBASE databases. We also searched personal files, as well as manuals and handbooks of organizations and professional societies that proposed methods for the adaptation and adoption of guidelines. We followed standard systematic review methodology. Our search captured 12,021 citations, out of which we identified eight proposed methods of guideline adaptation: ADAPTE, Adapted ADAPTE, the Alberta Ambassador Program adaptation phase, GRADE-ADOLOPMENT, MAGIC, RAPADAPTE, Royal College of Nursing (RCN), and Systematic Guideline Review (SGR). The ADAPTE framework consists of a 24-step process to adapt guidelines to a local context, taking into consideration the needs, priorities, legislation, policies, and resources. The Alexandria Center for Evidence-Based Clinical Practice Guidelines updated one of ADAPTE's tools, modified three tools, and added three new ones. In addition, they proposed optionally using three other tools. The Alberta Ambassador Program adaptation phase consists of 11 steps and focused on adapting good-quality guidelines for nonspecific low back pain to a local context. GRADE-ADOLOPMENT is an eight-step process based on the GRADE Working Group's Evidence to Decision frameworks, applied in 22 guidelines in the context of a national guideline development program. The MAGIC research program developed a five-step adaptation process, informed by ADAPTE and the GRADE approach, in the context of adapting thrombosis guidelines. The RAPADAPTE framework consists of 12 steps based on ADAPTE and using synthesized evidence databases, derived retrospectively from the experience of producing a high-quality guideline for the treatment of breast cancer with limited resources in Costa Rica. The RCN outlines a five-step strategy for the adaptation of guidelines to the local context. The SGR method consists of nine steps and takes into consideration both methodological gaps and context-specific normative issues in source guidelines. Through searching personal files, we identified two abandoned methods. We identified and described eight proposed frameworks for the adaptation of health-related guidelines. There is a need to evaluate these frameworks to assess the rigor, efficiency, and transparency of their proposed processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A Framework for Integrating Oceanographic Data Repositories

    NASA Astrophysics Data System (ADS)

    Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.

    2010-12-01

    Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.

  19. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    PubMed Central

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most existing methods, like the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; finally, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt changes more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for extracting useful information from all kinds of bioelectric time series signals. PMID:27413364
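    The KS statistic at the heart of this record measures the maximum gap between the empirical CDFs of the two segments on either side of a candidate change point. The sketch below computes the plain two-sample KS statistic and finds a change point by exhaustive scanning; the Haar-wavelet binary-tree search that makes BSTKS fast is deliberately not reproduced, and the synthetic data is invented.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of segments a and b."""
    both = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), both, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), both, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def brute_force_cp(x, min_seg=5):
    """Exhaustive change-point scan; this per-split cost over all splits is
    what BSTKS avoids by searching a Haar-wavelet binary tree instead."""
    splits = range(min_seg, len(x) - min_seg)
    return max(splits, key=lambda k: ks_statistic(x[:k], x[k:]))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(brute_force_cp(x))  # expect a split near index 100
```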

  20. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most existing methods, like the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; finally, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt changes more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for extracting useful information from all kinds of bioelectric time series signals.

  1. A Modular Simulation Framework for Assessing Swarm Search Models

    DTIC Science & Technology

    2014-09-01

    A Modular Simulation Framework for Assessing Swarm Search Models. Author: Blake M. Wanier. Numerical studies demonstrate the ability to leverage the developed simulation and analysis framework to investigate three canonical swarm search models ... as benchmarks for future exploration of more sophisticated swarm search scenarios. Subject terms: swarm search, search theory, modeling framework.

  2. Engaging Elderly People in Telemedicine Through Gamification

    PubMed Central

    Tabak, Monique; Dekker - van Weering, Marit; Vollenbroek-Hutten, Miriam

    2015-01-01

    Background: Telemedicine can alleviate the increasing demand for elderly care caused by the rapidly aging population. However, user adherence to technology in telemedicine interventions is low and decreases over time. Therefore, there is a need for methods to increase adherence, specifically of the elderly user. A strategy that has recently emerged to address this problem is gamification: the application of game elements to nongame fields to motivate and increase user activity and retention. Objective: This research aims to (1) provide an overview of existing theoretical frameworks for gamification and explore methods that specifically target the elderly user and (2) explore user classification theories for tailoring game content to the elderly user. This knowledge will provide a foundation for creating a new framework for applying gamification in telemedicine applications to effectively engage the elderly user by increasing and maintaining adherence. Methods: We performed a broad Internet search using scientific and nonscientific search engines and included information that described any of the following subjects: the conceptualization of gamification, methods to engage elderly users through gamification, or user classification theories for tailored game content. Results: Our search showed two main approaches concerning frameworks for gamification: from business practices, which mostly aim for more revenue, emerges an applied approach, while academic frameworks are developed incorporating theories on motivation, often aiming for lasting engagement. The search provided limited information regarding the application of gamification to engage elderly users, and revealed a significant gap in knowledge on the effectiveness of gamified applications in practice. Several approaches for classifying users in general were found, based on archetypes and reasons to play, and we present them along with their corresponding taxonomies. The overview we created indicates great connectivity between these taxonomies. Conclusions: Gamification frameworks have been developed from different backgrounds, business and academia, but rarely target the elderly user. The effectiveness of user classifications for tailored game content in this context is not yet known. As a next step, we propose the development of a framework based on the hypothesized existence of a relation between preference for game content and personality. PMID:26685287

  3. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. The accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized web API is feasible. This framework can be easily enhanced due to its modular design.
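    The two compared strategies, fuzzy similarity search over a large code table versus lookup in a manually curated repository, can be sketched generically as below. The terms, codes, and fallback logic are invented for illustration and are not the record's actual UMLS mappers.

```python
import difflib

CURATED = {"heart attack": "I21"}                 # manually curated repository
CODE_TABLE = {"myocardial infarction": "I21",
              "cerebral infarction": "I63"}       # large terminology table

def map_term(term):
    """Curated lookup first; fall back to fuzzy similarity search."""
    if term in CURATED:
        return CURATED[term], "curated"
    match = difflib.get_close_matches(term, list(CODE_TABLE), n=1, cutoff=0.6)
    return (CODE_TABLE[match[0]], "similarity") if match else (None, "unmapped")

print(map_term("heart attack"))            # -> ('I21', 'curated')
print(map_term("myocardial infarctions"))  # -> ('I21', 'similarity')
```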

  4. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care.

    PubMed

    Valentijn, Pim P; Schepman, Sanneke M; Opheij, Wilfrid; Bruijnzeels, Marc A

    2013-01-01

    Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective.

  5. Optimal directed searches for continuous gravitational waves

    NASA Astrophysics Data System (ADS)

    Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning

    2016-03-01

    Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
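    The framework sketched in this abstract can be summarized as a constrained optimization; in schematic form (notation ours, not the paper's):

```latex
% Choose the target(s), the frequency/spin-down region, and the search
% setup to maximize detection probability under a computing budget:
\max_{\text{targets},\;\Delta f,\;\Delta \dot f,\;\text{setup}}
  \; P_{\mathrm{det}}
\quad \text{subject to} \quad
  C(\text{targets}, \Delta f, \Delta \dot f, \text{setup}) \le C_{\max}
```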

  6. SPIKE: AI scheduling techniques for Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Johnston, Mark D.

    1991-09-01

    AI (Artificial Intelligence) scheduling techniques for HST are presented in the form of viewgraphs. The following subject areas are covered: domain; HST constraint timescales; HST scheduling; SPIKE overview; SPIKE architecture; constraint representation and reasoning; use of suitability functions by the scheduling agent; SPIKE screen example; advantages of the suitability function framework; limiting search and constraint propagation; scheduling search; stochastic search; repair methods; implementation; and status.

  7. Charter Schools and the Teacher Job Search

    ERIC Educational Resources Information Center

    Cannata, Marisa

    2011-01-01

    This article examines the position of charter schools in prospective elementary teachers' job search decisions. Using a labor market segmentation framework, it explores teacher applicants' decisions to apply to charter schools. The data come from a mixed-methods longitudinal study of prospective teachers looking for their first job. This article…

  8. ISART: A Generic Framework for Searching Books with Social Information

    PubMed Central

    Cui, Xiao-Ping; Qu, Jiao; Geng, Bin; Zhou, Fang; Song, Li; Hao, Hong-Wei

    2016-01-01

    Effective book search has been discussed for decades and is still future-proof in areas as diverse as computer science, informatics, e-commerce and even culture and arts. A variety of social information contents (e.g., ratings, tags and reviews) emerge with the huge number of books on the Web, but how they can be utilized for searching and finding books is seldom investigated. Here we develop an Integrated Search And Recommendation Technology (IsArt), which breaks new ground by providing a generic framework for searching books with rich social information. IsArt comprises a search engine to rank books with book contents and professional metadata, a Generalized Content-based Filtering model to thereafter rerank books with user-generated social contents, and a learning-to-rank technique to finally combine a wide range of diverse reranking results. Experiments show that this technology permits embedding social information to promote book search effectiveness, and that IsArt, by making use of it, has the best performance on the CLEF/INEX Social Book Search Evaluation datasets of all 4 years (from 2011 to 2014), compared with some other state-of-the-art methods. PMID:26863545
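    IsArt's final stage combines diverse reranking results with a learning-to-rank technique. As a minimal stand-in for that combination step, the sketch below fuses several rankings by a weighted sum of min-max normalized scores; in a real learning-to-rank setting the weights (or a richer model) would be fitted from relevance judgments, and all scores here are invented.

```python
def fuse(rankings, weights):
    """Combine several {doc: score} rankings by a weighted sum of min-max
    normalized scores. A learning-to-rank method would fit the weights
    (or a richer model) from relevance judgments instead of fixing them."""
    fused = {}
    for ranking, w in zip(rankings, weights):
        lo, hi = min(ranking.values()), max(ranking.values())
        for doc, s in ranking.items():
            norm = (s - lo) / (hi - lo) if hi > lo else 0.0
            fused[doc] = fused.get(doc, 0.0) + w * norm
    return sorted(fused, key=fused.get, reverse=True)

content = {"b1": 4.2, "b2": 3.9, "b3": 1.0}   # search-engine scores
social = {"b2": 0.9, "b3": 0.8, "b1": 0.1}    # social-content reranker scores
print(fuse([content, social], weights=[0.6, 0.4]))
```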

  9. ISART: A Generic Framework for Searching Books with Social Information.

    PubMed

    Yin, Xu-Cheng; Zhang, Bo-Wen; Cui, Xiao-Ping; Qu, Jiao; Geng, Bin; Zhou, Fang; Song, Li; Hao, Hong-Wei

    2016-01-01

    Effective book search has been discussed for decades and is still future-proof in areas as diverse as computer science, informatics, e-commerce and even culture and arts. A variety of social information contents (e.g., ratings, tags and reviews) emerge with the huge number of books on the Web, but how they can be utilized for searching and finding books is seldom investigated. Here we develop an Integrated Search And Recommendation Technology (IsArt), which breaks new ground by providing a generic framework for searching books with rich social information. IsArt comprises a search engine to rank books with book contents and professional metadata, a Generalized Content-based Filtering model to thereafter rerank books with user-generated social contents, and a learning-to-rank technique to finally combine a wide range of diverse reranking results. Experiments show that this technology permits embedding social information to promote book search effectiveness, and that IsArt, by making use of it, has the best performance on the CLEF/INEX Social Book Search Evaluation datasets of all 4 years (from 2011 to 2014), compared with some other state-of-the-art methods.

  10. Development of a Search Strategy for an Evidence Based Retrieval Service

    PubMed Central

    Ho, Gah Juan; Liew, Su May; Ng, Chirk Jenn; Hisham Shunmugam, Ranita; Glasziou, Paul

    2016-01-01

    Background: Physicians are often encouraged to locate answers to their clinical queries via an evidence-based literature search approach. The methods used are often not clearly specified. Inappropriate search strategies, time constraints and contradictory information complicate evidence retrieval. Aims: Our study aimed to develop a search strategy to answer clinical queries among physicians in a primary care setting. Methods: Six clinical questions on different medical conditions seen in primary care were formulated. A series of experimental searches to answer each question was conducted on 3 commonly advocated medical databases. We compared search results from a PICO (patients, intervention, comparison, outcome) framework for questions using different combinations of PICO elements. We also compared outcomes from searches using text words, Medical Subject Headings (MeSH), or a combination of both. All searches were documented using screenshots and saved search strategies. Results: Answers to all 6 questions were found using the PICO framework. A higher number of systematic reviews was obtained using a 2-element PICO search than a 4-element search. The better choice of search is a combination of both text words and MeSH terms. Despite searching with the Systematic Review filter, many non-systematic or narrative reviews were found in PubMed. There was poor overlap between the outcomes of searches using different databases. The duration of searching and screening for the 6 questions ranged from 1 to 4 hours. Conclusion: This strategy has been shown to be feasible and can provide evidence for doctors' clinical questions. It has the potential to be incorporated into an interventional study to determine the impact of an online evidence retrieval system. PMID:27935993
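    A 2-element PICO strategy combining text words with MeSH terms, of the kind the study found effective, might be assembled as below. The condition and drug are our own illustration, not one of the study's six questions.

```python
# Build an example 2-element (P + I) PubMed query mixing free-text [tiab]
# terms with MeSH headings; the clinical topic here is invented.
population = '("diabetes mellitus, type 2"[MeSH Terms] OR "type 2 diabetes"[tiab])'
intervention = '("metformin"[MeSH Terms] OR metformin[tiab])'
query = f"{population} AND {intervention} AND systematic[sb]"
print(query)  # paste into the PubMed search box or send via an E-utilities call
```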

  11. Engaging Elderly People in Telemedicine Through Gamification.

    PubMed

    de Vette, Frederiek; Tabak, Monique; Dekker-van Weering, Marit; Vollenbroek-Hutten, Miriam

    2015-12-18

    Telemedicine can alleviate the increasing demand for elderly care caused by the rapidly aging population. However, user adherence to technology in telemedicine interventions is low and decreases over time. Therefore, there is a need for methods to increase adherence, specifically of the elderly user. A strategy that has recently emerged to address this problem is gamification: the application of game elements to nongame fields to motivate and increase user activity and retention. This research aims to (1) provide an overview of existing theoretical frameworks for gamification and explore methods that specifically target the elderly user and (2) explore user classification theories for tailoring game content to the elderly user. This knowledge will provide a foundation for creating a new framework for applying gamification in telemedicine applications to effectively engage the elderly user by increasing and maintaining adherence. We performed a broad Internet search using scientific and nonscientific search engines and included information that described any of the following subjects: the conceptualization of gamification, methods to engage elderly users through gamification, or user classification theories for tailored game content. Our search showed two main approaches concerning frameworks for gamification: from business practices, which mostly aim for more revenue, emerges an applied approach, while academic frameworks are developed incorporating theories on motivation, often aiming for lasting engagement. The search provided limited information regarding the application of gamification to engage elderly users, and revealed a significant gap in knowledge on the effectiveness of gamified applications in practice. Several approaches for classifying users in general were found, based on archetypes and reasons to play, and we present them along with their corresponding taxonomies. The overview we created indicates great connectivity between these taxonomies. Gamification frameworks have been developed from different backgrounds, business and academia, but rarely target the elderly user. The effectiveness of user classifications for tailored game content in this context is not yet known. As a next step, we propose the development of a framework based on the hypothesized existence of a relation between preference for game content and personality.

  12. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    PubMed

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative for creating personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate the core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabularies. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems, or complete personalization of the web interface. The construction of the Smart Drug Search is described as a proof of concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and similar topics. The keyword-based queries of users are transformed into concepts, and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of a concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e. concepts related by other intermediary concepts, can be useful for integrating information from different studies and looking into non-trivial relations. The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are its modularity and extensibility in terms of software design, its use of open-source, consolidated Web technologies, and its ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search contains over 1,186,000 documents with more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
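    The abstract says derived engines expose a RESTful API that returns results for keyword queries. The route and parameter names in the sketch below are hypothetical illustrations, not the framework's documented API.

```python
import json
from urllib import parse, request

# Hypothetical request shape: '/api/search' and the 'q'/'page' parameters
# are stand-ins; consult the framework's documentation for the real routes.
def search(base_url: str, keywords: str, page: int = 1) -> dict:
    qs = parse.urlencode({"q": keywords, "page": page})
    with request.urlopen(f"{base_url}/api/search?{qs}") as resp:
        return json.load(resp)  # e.g. ranked hits plus recognized concepts

# Example (network call, left commented out):
# hits = search("http://sing.ei.uvigo.es/sds", "carbapenem resistance")
```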

  13. Charter Schools and the Teacher Job Search in Michigan

    ERIC Educational Resources Information Center

    Cannata, Marisa

    2010-01-01

    This paper examines the position of charter schools in prospective elementary teachers' job search decisions. Using a labor market segmentation framework, it explores teacher applicants' decisions to apply to charter schools. The data come from a mixed-methods longitudinal study of prospective teachers looking for their first job. This paper finds…

  14. A Competitive and Experiential Assignment in Search Engine Optimization Strategy

    ERIC Educational Resources Information Center

    Clarke, Theresa B.; Clarke, Irvine, III

    2014-01-01

    Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…

  15. Probabilistic consensus scoring improves tandem mass spectrometry peptide identification.

    PubMed

    Nahnsen, Sven; Bertsch, Andreas; Rahnenführer, Jörg; Nordheim, Alfred; Kohlbacher, Oliver

    2011-08-05

    Database search is a standard technique for identifying peptides from their tandem mass spectra. To increase the number of correctly identified peptides, we suggest a probabilistic framework that allows the combination of scores from different search engines into a joint consensus score. Central to the approach is a novel method for estimating scores for peptides not found by an individual search engine. This approach allows the estimation of p-values for each candidate peptide and their combination across all search engines. The consensus approach works better than any single search engine across all the instrument types considered in this study. Improvements vary strongly from platform to platform and from search engine to search engine. Compared to the industry standard MASCOT, our approach can identify up to 60% more peptides. The software for consensus predictions is implemented in C++ as part of OpenMS, a software framework for mass spectrometry. The source code is available in the current development version of OpenMS and can easily be used as a command line application or via the graphical pipeline designer TOPPAS.
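    The paper's contribution is how scores are estimated for peptides an engine did not report; the combination step itself can be illustrated with a standard stand-in, Fisher's method for combining independent p-values. This is our generic substitute, not necessarily the paper's combination rule.

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) is chi-squared with 2k degrees
    of freedom under the null. For even dof the survival function has a
    closed form, exp(-x/2) * sum_{i<k} (x/2)^i / i!, so no library is needed."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    term, total = 1.0, 1.0          # i = 0 term of the series
    for i in range(1, k):
        term *= (x / 2) / i         # accumulate (x/2)^i / i!
        total += term
    return math.exp(-x / 2) * total

# Per-engine p-values for one candidate peptide (invented numbers):
print(fisher_combine([0.01, 0.20, 0.03]))  # small => engines jointly agree
```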

  16. Children's Programs: A Comparative Evaluation Framework and Five Illustrations. Briefing Report to the Ranking Minority Member, Select Committee on Children, Youth, and Families, House of Representatives.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Program Evaluation and Methodology Div.

    This general program evaluation framework provides a wide range of criteria that can be applied in the evaluation of diverse federal programs. The framework was developed from a literature search on program evaluation methods and their use, the experiences of the United States General Accounting Office (GAO), and consideration of the types of…

  17. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    PubMed

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that coping and deliberation are linked during decision making about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. In Search of the Reason for the Breathing Effect of MIL53 Metal-Organic Framework: An ab Initio Multiconfigurational Study.

    PubMed

    Weser, Oskar; Veryazov, Valera

    2017-01-01

    Multiconfigurational methods are applied to study electronic properties and structural changes in the highly flexible metal-organic framework MIL53(Cr). Via calculated bending potentials of the angles that change the most during the phase transition, it is verified that the high flexibility of this material is not a question of special electronic properties in the coordination chemistry, but of the overall linking of the framework. The complex possesses a demanding electronic structure with delocalized spin density, antiferromagnetic coupling and a high multi-state character requiring multiconfigurational methods. Calculated properties are in good agreement with known experimental values, confirming our chosen methods.

  19. Infodemiology and Infoveillance: Framework for an Emerging Set of Public Health Informatics Methods to Analyze Search, Communication and Publication Behavior on the Internet

    PubMed Central

    2009-01-01

    Infodemiology can be defined as the science of distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim to inform public health and public policy. Infodemiology data can be collected and analyzed in near real time. Examples of infodemiology applications include: the analysis of queries from Internet search engines to predict disease outbreaks (e.g. influenza); monitoring people's status updates on microblogs such as Twitter for syndromic surveillance; detecting and quantifying disparities in health information availability; identifying and monitoring public health-relevant publications on the Internet (e.g. anti-vaccination sites, but also news articles or expert-curated outbreak reports); automated tools to measure information diffusion and knowledge translation, and tracking the effectiveness of health marketing campaigns. Moreover, analyzing how people search and navigate the Internet for health-related information, as well as how they communicate and share this information, can provide valuable insights into the health-related behavior of populations. Seven years after the infodemiology concept was first introduced, this paper revisits the emerging fields of infodemiology and infoveillance and proposes an expanded framework, introducing some basic metrics such as information prevalence, concept occurrence ratios, and information incidence. The framework distinguishes supply-based applications (analyzing what is being published on the Internet, e.g. on Web sites, newsgroups, blogs, microblogs and social media) from demand-based methods (search and navigation behavior), and further distinguishes passive from active infoveillance methods. Infodemiology metrics follow population health-relevant events or predict them. Thus, these metrics and methods are potentially useful for public health practice and research, and should be further developed and standardized. PMID:19329408

  20. Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet.

    PubMed

    Eysenbach, Gunther

    2009-03-27

    Infodemiology can be defined as the science of distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim to inform public health and public policy. Infodemiology data can be collected and analyzed in near real time. Examples of infodemiology applications include the analysis of queries from Internet search engines to predict disease outbreaks (e.g. influenza), monitoring people's status updates on microblogs such as Twitter for syndromic surveillance, detecting and quantifying disparities in health information availability, identifying and monitoring public health-relevant publications on the Internet (e.g. anti-vaccination sites, but also news articles or expert-curated outbreak reports), automated tools to measure information diffusion and knowledge translation, and tracking the effectiveness of health marketing campaigns. Moreover, analyzing how people search and navigate the Internet for health-related information, as well as how they communicate and share this information, can provide valuable insights into the health-related behavior of populations. Seven years after the infodemiology concept was first introduced, this paper revisits the emerging fields of infodemiology and infoveillance and proposes an expanded framework, introducing some basic metrics such as information prevalence, concept occurrence ratios, and information incidence. The framework distinguishes supply-based applications (analyzing what is being published on the Internet, e.g. on Web sites, newsgroups, blogs, microblogs and social media) from demand-based methods (search and navigation behavior), and further distinguishes passive from active infoveillance methods. Infodemiology metrics follow population health-relevant events or predict them. Thus, these metrics and methods are potentially useful for public health practice and research, and should be further developed and standardized.
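
    As a small numerical illustration of the kind of metric proposed, the sketch below computes one plausible reading of a "concept occurrence ratio": the share of sampled status updates mentioning a concept of interest. The metric definitions in the paper are richer; the corpus and term list here are purely illustrative.

```python
# Toy "concept occurrence ratio": fraction of sampled posts that mention a
# concept of interest. Corpus and term list are invented for illustration.
posts = [
    "down with the flu again, staying home",
    "beautiful day for a run",
    "fever and chills all night, probably flu",
    "coffee first, questions later",
]
flu_terms = {"flu", "influenza", "fever"}

mentions = sum(1 for p in posts if flu_terms & set(p.lower().split()))
ratio = mentions / len(posts)
print(f"concept occurrence ratio: {ratio:.2f}")  # 0.50 for this toy corpus
```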

  1. Can we decide which outcomes should be measured in every clinical trial? A scoping review of the existing conceptual frameworks and processes to develop core outcome sets.

    PubMed

    Idzerda, Leanne; Rader, Tamara; Tugwell, Peter; Boers, Maarten

    2014-05-01

    The usefulness of randomized controlled trials to advance clinical care depends upon the outcomes reported, but disagreement on the choice of outcome measures has resulted in inconsistency and the potential for reporting bias. One solution to this problem is the development of a core outcome set: a minimum set of outcome measures deemed critical for clinical decision making. Within rheumatology, the Outcome Measures in Rheumatology (OMERACT) initiative has pioneered the development of core outcome sets since 1992. As the number of diseases addressed by OMERACT has increased and its experience in formulating core sets has grown, clarification and update of the conceptual framework and formulation of a more explicit process of area/domain core set development have become necessary. As part of the update process of the OMERACT Filter criteria to version 2, a literature review was undertaken to compare and contrast the OMERACT conceptual framework with others within and outside rheumatology. A scoping search was undertaken to examine the extent, range, and nature of conceptual frameworks for core set outcome selection in health. We searched the following resources: Cochrane Library Methods Group Register; Medline; Embase; PsycInfo; Environmental Studies and Policy Collection; and ABI/INFORM Global. We also conducted a targeted Google search. Five conceptual frameworks were identified: the WHO tripartite definition of health; the 5 Ds (discomfort, disability, drug toxicity, dollar cost, and death); the International Classification of Functioning (ICF); PROMIS (Patient-Reported Outcomes Measurement Information System); and the Outcomes Hierarchy. Of these, only the 5 Ds and ICF frameworks have been systematically applied in core set development. Outside the area of rheumatology, several core sets were identified; these had been developed through a limited range of consensus-based methods with varying degrees of methodological rigor. None applied a framework to ensure content validity of the end product. This scoping review reinforced the need for clear methods and standards for core set development. Based on these findings, OMERACT will make its own conceptual framework and working process more explicit. Proposals for how to achieve this were discussed at the OMERACT 11 conference.

  2. Image Search Reranking With Hierarchical Topic Awareness.

    PubMed

    Tian, Xinmei; Yang, Linjun; Lu, Yijuan; Tian, Qi; Tao, Dacheng

    2015-10-01

    With much attention from both the academic and industrial communities, visual search reranking has recently been proposed to refine image search results obtained from text-based image search engines. Most traditional reranking methods cannot capture both the relevance and the diversity of the search results at the same time, or they ignore the hierarchical topic structure of the results and treat each topic equally and independently. However, in real applications, images returned for certain queries are naturally organized hierarchically, rather than in a simple parallel relation. In this paper, a new reranking method, "topic-aware reranking (TARerank)", is proposed. TARerank describes the hierarchical topic structure of search results in one model, and seamlessly captures both the relevance and the diversity of the image search results. Through a structured learning framework, relevance and diversity are modeled in TARerank by a set of carefully designed features, and then the model is learned from human-labeled training samples. The learned model is expected to predict reranking results with high relevance and diversity for testing queries. To verify the effectiveness of the proposed method, we collect an image search dataset and conduct comparison experiments on it. The experimental results demonstrate that the proposed TARerank outperforms the existing relevance-based and diversified reranking methods.

  3. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating the probabilities in the reliability constraints. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
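
    For readers unfamiliar with the underlying optimizer, the sketch below is a compact, continuous-variable gravitational search algorithm minimizing a test function. The paper's discrete variant and the WWLS-SVM metamodel are beyond this illustration, so the objective, constants and schedules are generic choices, not the authors'.

```python
# Minimal continuous GSA sketch: agents attract one another with "forces"
# proportional to fitness-derived masses; the gravitational constant decays
# over time so the swarm settles into good regions. Toy objective only.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):          # stand-in objective (structural cost in the paper)
    return np.sum(x**2, axis=1)

n, dim, iters, G0 = 20, 5, 100, 100.0
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))

for t in range(iters):
    fit = sphere(X)
    best, worst = fit.min(), fit.max()
    m = (worst - fit) / (worst - best + 1e-12)      # mass from fitness
    M = m / (m.sum() + 1e-12)
    G = G0 * np.exp(-20.0 * t / iters)              # decaying constant
    A = np.zeros((n, dim))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            R = np.linalg.norm(X[i] - X[j])
            # acceleration contribution of j on i (mass of i cancels out)
            A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
    V = rng.random((n, dim)) * V + A                # stochastic velocity update
    X = X + V

print("best objective found:", sphere(X).min())
```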

  4. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  5. Medical Subject Headings (MeSH) for indexing and retrieving open-source healthcare data.

    PubMed

    Marc, David T; Khairat, Saif S

    2014-01-01

    The US federal government initiated the Open Government Directive, under which federal agencies are required to publish high-value datasets so that they are available to the public. Data.gov and the community site Healthdata.gov were initiated to disseminate such datasets. However, data searches and retrieval for these sites are keyword driven and severely limited in performance. The purpose of this paper is to address the issue of extracting relevant open-source data by proposing a method of adopting the MeSH framework for indexing and data retrieval. A pilot study was conducted to compare the performance of traditional keywords to MeSH terms for retrieving relevant open-source datasets related to "mortality". The MeSH framework resulted in greater sensitivity with comparable specificity to the keyword search. MeSH showed promise as a method for indexing and retrieving data, yet future research should conduct a larger-scale evaluation of the performance of the MeSH framework for retrieving relevant open-source healthcare datasets.
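
    The core idea, mapping free-text keywords onto controlled headings before matching against dataset indexes, can be shown in a few lines. The mapping table and dataset tags below are tiny illustrative stand-ins for the real MeSH thesaurus and Healthdata.gov catalog.

```python
# Toy sketch of MeSH-style retrieval: normalize a free-text query to a
# controlled heading, then match datasets indexed under that heading.
KEYWORD_TO_MESH = {
    "death rate": "Mortality",
    "deaths": "Mortality",
    "mortality": "Mortality",
    "heart attack": "Myocardial Infarction",
}

datasets = {  # hypothetical dataset names tagged with MeSH headings
    "county_death_stats.csv": {"Mortality", "Vital Statistics"},
    "er_visits_2012.csv": {"Emergency Medical Services"},
}

def mesh_search(query):
    heading = KEYWORD_TO_MESH.get(query.lower())
    return [name for name, tags in datasets.items() if heading in tags]

print(mesh_search("death rate"))   # -> ['county_death_stats.csv']
```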

  6. The EMBL-EBI bioinformatics web and programmatic tools framework.

    PubMed

    Li, Weizhong; Cowley, Andrew; Uludag, Mahmut; Gur, Tamer; McWilliam, Hamish; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Lopez, Rodrigo

    2015-07-01

    Since 2009 the EMBL-EBI Job Dispatcher framework has provided free access to a range of mainstream sequence analysis applications. These include sequence similarity search services (https://www.ebi.ac.uk/Tools/sss/) such as BLAST, FASTA and PSI-Search, multiple sequence alignment tools (https://www.ebi.ac.uk/Tools/msa/) such as Clustal Omega, MAFFT and T-Coffee, and other sequence analysis tools (https://www.ebi.ac.uk/Tools/pfa/) such as InterProScan. Through these services users can search mainstream sequence databases such as ENA, UniProt and Ensembl Genomes, utilising a uniform web interface or systematically through Web Services interfaces (https://www.ebi.ac.uk/Tools/webservices/) using common programming languages, and obtain enriched results with novel visualisations. Integration with EBI Search (https://www.ebi.ac.uk/ebisearch/) and the dbfetch retrieval service (https://www.ebi.ac.uk/Tools/dbfetch/) further expands the usefulness of the framework. New tools and updates such as NCBI BLAST+, InterProScan 5 and PfamScan, new categories such as RNA analysis tools (https://www.ebi.ac.uk/Tools/rna/), new databases such as ENA non-coding, WormBase ParaSite, Pfam and Rfam, and new workflow methods, together with the retirement of deprecated services, ensure that the framework remains relevant to today's biological community. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
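
    A hedged sketch of driving one of these tools programmatically is shown below. The run/status/result pattern matches the Job Dispatcher's documented REST style, but the exact endpoint paths and parameter values here should be treated as assumptions and checked against the current documentation at https://www.ebi.ac.uk/Tools/webservices/ before use.

```python
# Sketch of submitting a BLAST job to the Job Dispatcher REST services and
# polling for the result. Paths and parameter values are assumptions drawn
# from the service's documented run/status/result pattern; verify them.
import time
import requests

BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"

job_id = requests.post(f"{BASE}/run", data={
    "email": "you@example.org",          # an address is required by the service
    "program": "blastp",
    "stype": "protein",
    "database": "uniprotkb_swissprot",
    "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
}).text

while requests.get(f"{BASE}/status/{job_id}").text == "RUNNING":
    time.sleep(5)                        # poll politely

print(requests.get(f"{BASE}/result/{job_id}/out").text[:500])
```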

  7. An approach in building a chemical compound search engine in oracle database.

    PubMed

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for and identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for working as a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.

  8. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.

  9. Generalized Minimum-Time Follow-up Approaches Applied to Electro-Optical Sensor Tasking

    NASA Astrophysics Data System (ADS)

    Murphy, T. S.; Holzinger, M. J.

    This work proposes a methodology for tasking sensors to search an area of state space for a particular object, group of objects, or class of objects. It creates a general unified mathematical framework for analyzing reacquisition, search, scheduling, and custody operations. In particular, it considers searching for unknown space object(s) with prior knowledge in the form of a set, which can be defined via an uncorrelated track, a region of state space, or a variety of other methods. The follow-up tasking can occur from a variable location and time, which often requires searching a large region of the sky. The area of a search region is analyzed over time to inform a time-optimal search method. Simulation work analyzes search regions relative to a particular sensor and tests a tasking algorithm that searches through the region. The tasking algorithm is also validated on a reacquisition problem with a telescope system at Georgia Tech.

  10. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
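
    The sketch below gives the flavor of the hybrid described in the presentation: a plain genetic algorithm whose offspring are refined by one pass of bit-flip hill climbing. The objective, encoding and rates are illustrative choices, not the presentation's geometric model-matching application.

```python
# Minimal memetic (hybrid) GA: GA global search + local hill climbing on each
# offspring. Toy objective: maximize f(x) = x*sin(x) on [0, 20], 16-bit coded.
import math
import random

random.seed(7)
BITS, POP, GENS = 16, 30, 40

def decode(bits):               # map a 16-bit string to x in [0, 20]
    return int("".join(map(str, bits)), 2) / (2**BITS - 1) * 20.0

def fitness(bits):              # multimodal test objective
    x = decode(bits)
    return x * math.sin(x)

def hill_climb(bits):
    """One first-improvement pass: keep any single bit flip that helps."""
    best = fitness(bits)
    for i in range(BITS):
        bits[i] ^= 1
        f = fitness(bits)
        if f > best:
            best = f            # keep the improving flip
        else:
            bits[i] ^= 1        # undo
    return bits

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, BITS)
        child = a[:cut] + b[cut:]               # one-point crossover
        if random.random() < 0.2:
            child[random.randrange(BITS)] ^= 1  # mutation
        children.append(hill_climb(child))      # the "hybrid" step
    pop = parents + children

best = max(pop, key=fitness)
print(f"x = {decode(best):.3f}, f(x) = {fitness(best):.3f}")
```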

  11. Reverse Nearest Neighbor Search on a Protein-Protein Interaction Network to Infer Protein-Disease Associations.

    PubMed

    Suratanee, Apichat; Plaimas, Kitiporn

    2017-01-01

    The associations between proteins and diseases are crucial information for investigating pathological mechanisms. However, the number of known and reliable protein-disease associations is quite small. In this study, an analysis framework to infer associations between proteins and diseases was developed based on a large human protein-protein interaction network data set, integrating an effective network search, namely the reverse k-nearest neighbor (RkNN) search. The RkNN search was used to identify the impact of a protein on other proteins. Then, associations between proteins and diseases were inferred statistically. The method using the RkNN search yielded a much higher precision than random selection, a standard nearest neighbor search, or applying the method to a random protein-protein interaction network. All protein-disease pair candidates were verified by a literature search. Supporting evidence for 596 pairs was identified. In addition, cluster analysis of these candidates revealed 10 promising groups of diseases to be further investigated experimentally. This method can be used to identify novel associations to better understand complex relationships between proteins and diseases.
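
    The RkNN query itself is easy to state: the reverse k-nearest neighbors of a query protein q are all proteins that count q among their own k nearest neighbors. The sketch below demonstrates this on a tiny toy network using shortest-path distance; the study's network, distance measure and statistical inference layer are far richer.

```python
# RkNN on a toy interaction network: v is in RkNN(q) iff q is in kNN(v).
import heapq

edges = {  # tiny undirected toy network: protein -> neighbors
    "A": ["B", "C"], "B": ["A", "C", "D"],
    "C": ["A", "B"], "D": ["B", "E"], "E": ["D"],
}

def shortest_dists(src):
    """Unweighted shortest-path distances from src to all reachable nodes."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        for v in edges[u]:
            if v not in dist or d + 1 < dist[v]:
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return dist

def knn(node, k):
    d = shortest_dists(node)
    d.pop(node)
    return {v for v, _ in sorted(d.items(), key=lambda kv: kv[1])[:k]}

def rknn(query, k):
    return {v for v in edges if v != query and query in knn(v, k)}

print(rknn("B", k=2))   # proteins that see B among their 2 nearest neighbors
```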

  12. Evolutionary-inspired probabilistic search for enhancing sampling of local minima in the protein energy surface

    PubMed Central

    2012-01-01

    Background Despite computational challenges, elucidating conformations that a protein system assumes under physiologic conditions for the purpose of biological activity is a central problem in computational structural biology. While these conformations are associated with low energies in the energy surface that underlies the protein conformational space, few existing conformational search algorithms focus on explicitly sampling low-energy local minima in the protein energy surface. Methods This work proposes a novel probabilistic search framework, PLOW, that explicitly samples low-energy local minima in the protein energy surface. The framework combines algorithmic ingredients from evolutionary computation and computational structural biology to effectively explore the subspace of local minima. A greedy local search maps a conformation sampled in conformational space to a nearby local minimum. A perturbation move jumps out of a local minimum to obtain a new starting conformation for the greedy local search. The process repeats in an iterative fashion, resulting in a trajectory-based exploration of the subspace of local minima. Results and conclusions The analysis of PLOW's performance shows that, by navigating only the subspace of local minima, PLOW is able to sample conformations near a protein's native structure either more effectively than, or as well as, state-of-the-art methods that focus on reproducing the native structure for a protein system. Analysis of the actual subspace of local minima shows that PLOW samples this subspace more effectively than a naive sampling approach. Additional theoretical analysis reveals that the perturbation function employed by PLOW is key to its ability to sample a diverse set of low-energy conformations. This analysis also suggests directions for further research and novel applications for the proposed framework. PMID:22759582
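
    The descend-then-perturb loop at the heart of this framework can be shown on a one-dimensional toy "energy surface". Real protein conformations are high-dimensional, and the energy function, step sizes and perturbation scale below are purely illustrative.

```python
# PLOW-style iterated local search on a rugged 1-D toy surface: greedy
# descent finds the nearest local minimum, a perturbation jumps to a new
# basin, and the loop traces a trajectory through the subspace of minima.
import math
import random

random.seed(3)
energy = lambda x: math.sin(5 * x) + 0.1 * x * x   # rugged toy surface

def greedy_descent(x, step=0.01):
    """Walk downhill until no neighboring move improves the energy."""
    while True:
        best = min((x - step, x + step), key=energy)
        if energy(best) >= energy(x):
            return x                # reached a local minimum
        x = best

minima, x = [], random.uniform(-3, 3)
for _ in range(20):
    x = greedy_descent(x)
    minima.append((round(x, 3), round(energy(x), 3)))
    x += random.gauss(0, 1.0)       # perturbation: jump to a new basin

print(sorted(set(minima), key=lambda m: m[1])[:3])   # lowest minima sampled
```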

  13. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search

    PubMed Central

    Fang, Leyuan; Cunefare, David; Wang, Chong; Guymer, Robyn H.; Li, Shutao; Farsiu, Sina

    2017-01-01

    We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique. PMID:28663902

  14. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search.

    PubMed

    Fang, Leyuan; Cunefare, David; Wang, Chong; Guymer, Robyn H; Li, Shutao; Farsiu, Sina

    2017-05-01

    We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique.
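
    The graph-search half of such a pipeline is essentially a shortest-path problem over the CNN's probability map. The sketch below finds a left-to-right boundary of minimal cost with a smoothness constraint via dynamic programming; the random map, image size and the plus-or-minus-one-row constraint are illustrative stand-ins for the paper's setup.

```python
# Boundary extraction by dynamic programming over a per-pixel probability
# map (random here, standing in for CNN output): find the cheapest
# left-to-right path where the row may change by at most 1 per column.
import numpy as np

rng = np.random.default_rng(0)
prob = rng.random((40, 64))          # rows x columns, fake CNN probability map
cost = 1.0 - prob                    # high probability -> low traversal cost

rows, cols = cost.shape
acc = cost.copy()
back = np.zeros((rows, cols), dtype=int)
for c in range(1, cols):
    for r in range(rows):
        lo, hi = max(r - 1, 0), min(r + 1, rows - 1)
        prev = acc[lo:hi + 1, c - 1]
        k = int(np.argmin(prev))
        acc[r, c] += prev[k]
        back[r, c] = lo + k          # remember the best predecessor row

# trace the boundary back from the cheapest endpoint in the last column
boundary = [int(np.argmin(acc[:, -1]))]
for c in range(cols - 1, 0, -1):
    boundary.append(int(back[boundary[-1], c]))
boundary.reverse()
print("boundary rows for first 10 columns:", boundary[:10])
```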

  15. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis.

    PubMed

    Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane

    2018-05-01

    This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criteria of meaningfulness, appropriateness, feasibility, and implementation. Complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Evaluating a Federated Medical Search Engine

    PubMed Central

    Belden, J.; Williams, J.; Richardson, B.; Schuster, K.

    2014-01-01

    Summary Background Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. Objectives To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. Methods This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Results Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants’ expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. Conclusions The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for system improvement. PMID:25298813

  17. BIRAM: a content-based image retrieval framework for medical images

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; Furuie, Sergio S.

    2006-03-01

    In the medical field, digital images are becoming more and more important for the diagnosis and therapy of patients. At the same time, the development of new technologies has increased the amount of image data produced in a hospital. This creates a demand for access methods that offer more than text-based queries for information retrieval. This paper proposes a framework for the retrieval of medical images that allows different algorithms to be used for searching medical images by similarity. The framework also enables searching for textual information from an associated medical report and DICOM header information. The proposed system can be used to support clinical decision making and is intended to be integrated with an open-source picture archiving and communication system (PACS). BIRAM has the following advantages: (i) it can accommodate several types of algorithms for image similarity search; (ii) it allows the report to be codified according to a medical dictionary, improving the indexing and retrieval of the information; (iii) the algorithms can be selectively applied to images with the appropriate characteristics, for instance, only to magnetic resonance images. The framework was implemented in the Java language using an MS Access 97 database. The proposed framework can still be improved by the use of regions of interest (ROIs), indexing with slim-trees, and integration with a PACS server.
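
    Search-by-similarity of the kind such frameworks plug in reduces to describing each image by a feature vector and ranking the archive by distance to the query. The sketch below uses a gray-level histogram as a crude descriptor; random arrays stand in for real DICOM pixel data.

```python
# Bare-bones content-based retrieval: histogram features + Euclidean ranking.
import numpy as np

rng = np.random.default_rng(42)

def features(img, bins=16):
    """Normalized intensity histogram as a crude content descriptor."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

archive = {f"study_{i}": rng.integers(0, 256, (64, 64)) for i in range(5)}
query = rng.integers(0, 256, (64, 64))

qf = features(query)
ranked = sorted(archive,
                key=lambda name: np.linalg.norm(features(archive[name]) - qf))
print("most similar studies:", ranked[:3])
```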

  18. A Systematic Review Exploring the Social Cognitive Theory of Self-Regulation as a Framework for Chronic Health Condition Interventions

    PubMed Central

    Tougas, Michelle E.; Hayden, Jill A.; McGrath, Patrick J.; Huguet, Anna; Rozario, Sharlene

    2015-01-01

    Background Theory is often recommended as a framework for guiding hypothesized mechanisms of treatment effect. However, there is limited guidance about how to use theory in intervention development. Methods We conducted a systematic review to provide an exemplar review evaluating the extent to which use of theory is identified and incorporated within existing interventions. We searched the electronic databases PubMed, PsycINFO, CENTRAL, and EMBASE from inception to May 2014. We searched clinicaltrials.gov for registered protocols, reference lists of relevant systematic reviews and included studies, and conducted a citation search in Web of Science. We included peer-reviewed publications of interventions that referenced the social cognitive theory of self-regulation as a framework for interventions to manage chronic health conditions. Two reviewers independently assessed articles for eligibility. We contacted all authors of included studies for information detailing intervention content. We describe how often theory mechanisms were addressed by interventions, and report intervention characteristics used to address theory. Results Of 202 articles that reported using the social cognitive theory of self-regulation, 52% failed to incorporate self-monitoring, a main theory component, and were therefore excluded. We included 35 interventions that adequately used the theory framework. Intervention characteristics were often poorly reported in peer-reviewed publications; 21 of 35 interventions incorporated characteristics that addressed each of the main theory components. Each intervention addressed, on average, six of eight self-monitoring mechanisms, two of five self-judgement mechanisms, and one of three self-evaluation mechanisms. The self-monitoring mechanisms ‘Feedback’ and ‘Consistency’ were addressed by all interventions, whereas the self-evaluation mechanisms ‘Self-incentives’ and ‘External rewards’ were addressed by six and four interventions, respectively. The present review establishes that systematic review is a feasible method of identifying use of theory as a conceptual framework for existing interventions. We identified the social cognitive theory of self-regulation as a feasible framework to guide intervention development for chronic health conditions. PMID:26252889

  19. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable?

    PubMed

    Booth, Andrew; Carroll, Christopher

    2015-09-01

    With recognition of the potential value of theory in understanding how interventions work comes a challenge: how can the identification of theory be made less haphazard? To explore the feasibility of systematic identification of theory. We searched PubMed for published reviews (1998-2012) that had explicitly sought to identify theory. Systematic searching may be characterised by a structured question, methodological filters and an itemised search procedure. We constructed a template (BeHEMoTh - Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory. The authors tested the template within two systematic reviews. Of 34 systematic reviews, only 12 reviews (35%) reported a method for identifying theory. Nineteen did not specify how they identified studies containing theory. Data were unavailable for three reviews. Candidate terms include concept(s)/conceptual, framework(s), model(s), and theory/theories/theoretical. Information professionals must overcome inadequate reporting and the use of theory out of context. The review team faces an additional concern in the lack of 'theory fidelity'. Based on experience with two systematic reviews, the BeHEMoTh template and procedure offer a feasible and useful approach for the identification of theory. Applications include realist synthesis, framework synthesis or review of complex interventions. The procedure requires rigorous evaluation. © 2015 Health Libraries Group.
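
    Assembling a BeHEMoTh-style search string is mechanical once the element lists are chosen, as the sketch below shows. The template structure comes from the paper; the specific behaviour, context and exclusion terms are illustrative.

```python
# Build a boolean query from BeHEMoTh elements (Behaviour, Health context,
# Exclusions, Models or Theories). Terms are invented examples.
behaviour = ["smoking cessation", "quit smoking"]
context = ["pregnancy", "antenatal"]
theories = ["theory", "theories", "theoretical", "model*", "framework*", "concept*"]
exclusions = ["animal"]

block = lambda terms: "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
query = " AND ".join([block(behaviour), block(context), block(theories)])
query += " NOT " + block(exclusions)
print(query)   # paste into a bibliographic database's advanced search
```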

  20. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.

  1. A Framework and Methodology for Navigating Disaster and Global Health in Crisis Literature

    PubMed Central

    Chan, Jennifer L.; Burkle, Frederick M.

    2013-01-01

    Both ‘disasters’ and ‘global health in crisis’ research have dramatically grown due to the ever-increasing frequency and magnitude of crises around the world. Large volumes of peer-reviewed literature are not only a testament to the field’s value and evolution, but also present an unprecedented outpouring of seemingly unmanageable information across a wide array of crises and disciplines. Disaster medicine, health and humanitarian assistance, global health and public health disaster literature all lie within the disaster and global health in crisis literature spectrum and are increasingly accepted as multidisciplinary and transdisciplinary disciplines. Researchers, policy makers, and practitioners now face a new challenge: that of accessing this expansive literature for decision-making and exploring new areas of research. Individuals are also reaching beyond the peer-reviewed environment to grey literature, using search engines like Google Scholar to access policy documents, consensus reports and conference proceedings. What is needed is a method and mechanism with which to search and retrieve relevant articles from this expansive body of literature. This manuscript presents both a framework and a workable process for a diverse group of users to navigate the growing peer-reviewed and grey disaster and global health in crises literature. Methods: Disaster terms from textbooks, peer-reviewed and grey literature were used to design a framework of thematic clusters and subject matter ‘nodes’. A set of 84 terms, selected from 143 curated terms, was organized within each node, reflecting topics within the disaster and global health in crisis literature. Terms were crossed with one another and the term ‘disaster’. The results were formatted into tables and matrices. This process created a roadmap of search terms that could be applied to the PubMed database. Each search in the matrix or table results in a listed number of articles. This process was applied to literature from PubMed from 2005-2011. A complementary process was also applied to Google Scholar using the same framework of clusters, nodes, and terms, expanding the search process to include the broader grey literature assets. Results: A framework of four thematic clusters and twelve subject matter nodes was designed to capture diverse disaster and global health in crisis-related content. From 2005-2011 there were 18,660 articles referring to the term [disaster]. Restricting the search to human research, MeSH terms, and the English language, there remained 7,736 articles, an unmanageable number to process adequately for research, policy or best practices. However, using the crossed search and matrix process revealed further examples of robust realms of research in disasters, emergency medicine, EMS, public health and global health. Examples of potential gaps in current peer-reviewed disaster and global health in crisis literature were identified as mental health, elderly care, and alternate sites of care. The same framework and process were then applied to Google Scholar, specifically for topics that returned few PubMed results. These example searches retrieved unique peer-reviewed articles not identified in PubMed, as well as documents including books, governmental documents and consensus papers.
    Conclusions: The proposed framework and methodology, using four clusters, twelve nodes and a matrix-and-table process applied to PubMed and Google Scholar, unlock otherwise inaccessible opportunities to better navigate the massively growing body of peer-reviewed disaster and global health in crises literature. This approach will assist researchers, policy makers, and practitioners to generate future research questions, report on the overall evolution of the disaster and global health in crisis field, and further guide disaster planning, prevention, preparedness, mitigation, response and recovery. PMID:23591457
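
    The crossed-search step is straightforward to mechanize: pair each curated term with every other term (and with "disaster") to produce the matrix of queries whose hit counts populate the tables. The terms below are a small illustrative subset of the 84 used in the study.

```python
# Generate the matrix of crossed PubMed-style queries described above.
from itertools import combinations

terms = ["mental health", "elderly", "surge capacity", "triage"]

queries = [f'"{a}" AND "{b}"' for a, b in combinations(terms, 2)]
queries += [f'"{t}" AND "disaster"' for t in terms]

for q in queries:
    print(q)   # each query's hit count fills one matrix cell
```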

  2. Assessing the Spread and Uptake of a Framework for Introducing and Evaluating Advanced Practice Nursing Roles.

    PubMed

    Boyko, Jennifer A; Carter, Nancy; Bryant-Lukosius, Denise

    2016-08-01

    Health system researchers must ensure that the products of their work meet the needs of various stakeholder groups (e.g., patients, practitioners, and policy makers). Evidence-based frameworks can support the uptake and spread of research evidence; however, their existence as knowledge translation tools does not ensure their uptake and it is difficult to ascertain their spread into research, practice, and policy using existing methods. The purpose of this article is to report results of a study on the spread and uptake of an evidence-based framework (i.e., the participatory, evidence-based, patient-focused process for advanced practice nursing [PEPPA] framework) into research, practice, and policies relevant to the introduction and evaluation of advanced practice nursing roles. We also reflect on the utility of using a modified citation methodology to evaluate knowledge translation efforts. We searched four databases for literature published between 2004 and 2014 citing the original paper in which the PEPPA framework was published, and carried out an Internet search for grey literature using keywords. Relevant data were extracted from sources and organized using NVivo software. We analysed results descriptively. Our search yielded 164 unique sources of which 69.5% were from published literature and the majority (83.4%) of these were published in nursing journals. Most frequently (71.5%), the framework was used by researchers and students in research studies. A smaller number of citations (11.3%) reflected use of the PEPPA framework in practice settings with a focus on role development, implementation, evaluation, or a combination of these. This study demonstrates that the PEPPA framework has been used to varying degrees as intended, and provides guidance on how to evaluate the spread and uptake of research outputs (e.g., theoretical frameworks). Further research is needed about ways to determine whether evidence-informed research tools such as frameworks have been taken up successfully into practice and policy contexts. © 2016 Sigma Theta Tau International.

  3. Inference-Based Similarity Search in Randomized Montgomery Domains for Privacy-Preserving Biometric Identification.

    PubMed

    Wang, Yi; Wan, Jianwu; Guo, Jun; Cheung, Yiu-Ming; Yuen, Pong C

    2018-07-01

    Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives.
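
    For readers unfamiliar with the arithmetic primitive the paper repurposes, the self-contained sketch below implements textbook Montgomery multiplication: given an odd modulus N and R = 2^k > N, the REDC step maps T to T * R^-1 mod N without dividing by N. The parameters are kept tiny for readability, not cryptographic strength, and this sketch says nothing about the paper's index-generation scheme itself.

```python
# Textbook Montgomery multiplication on small numbers.
N = 97                      # odd modulus
k = 7
R = 1 << k                  # R = 128 > N, gcd(R, N) = 1
N_inv = pow(-N, -1, R)      # N' with N * N' = -1 (mod R)

def redc(T):
    """Montgomery reduction: returns T * R^-1 mod N."""
    m = (T * N_inv) & (R - 1)          # mod R is a mask since R = 2^k
    t = (T + m * N) >> k               # exact division by R
    return t - N if t >= N else t

def mont_mul(a, b):
    """Multiply a, b given in Montgomery form (x * R mod N)."""
    return redc(a * b)

a, b = 11, 23
aR, bR = (a * R) % N, (b * R) % N       # into Montgomery form
product = redc(mont_mul(aR, bR))        # back to ordinary form
assert product == (a * b) % N
print(product)                          # 11 * 23 mod 97 = 59
```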

  4. Stepwise and stagewise approaches for spatial cluster detection

    PubMed Central

    Xu, Jiale

    2016-01-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power of detection. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273

  5. Stepwise and stagewise approaches for spatial cluster detection.

    PubMed

    Xu, Jiale; Gangnon, Ronald E

    2016-05-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
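
    The forward-stepwise idea, greedily adding the candidate that most improves the fit while re-estimating previously chosen effects, can be shown generically. In the sketch below, columns of a synthetic design matrix stand in for spatial cluster indicator variables; the selection criterion is residual sum of squares rather than the paper's likelihood.

```python
# Generic forward stepwise selection: add one candidate per step, refitting
# all chosen effects jointly. Synthetic regression data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(size=n)   # true "clusters": 3, 7

selected = []
for _ in range(3):
    def rss(j):
        """Residual sum of squares after adding candidate j."""
        cols = X[:, selected + [j]]
        beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        return np.sum((y - cols @ beta) ** 2)
    best = min((j for j in range(p) if j not in selected), key=rss)
    selected.append(best)

print("selected candidates:", selected)   # 3 and 7 should appear first
```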

  6. Policy guidance on threats to legislative interventions in public health: a realist synthesis

    PubMed Central

    2011-01-01

    Background Legislation is one of the most powerful weapons for improving population health and is often used by policy and decision makers. Little research exists to guide them as to whether legislation is feasible and/or will succeed. We aimed to produce a coherent and transferable evidence-based framework of threats to legislative interventions to assist the decision making process, and to test this through the 'case study' of legislation to ban smoking in cars carrying children. Methods We conceptualised legislative interventions as complex social interventions and so used the realist synthesis method to systematically review the literature for evidence. 99 articles were found through searches on five electronic databases (MEDLINE, HMIC, EMBASE, PsycINFO, Social Policy and Practice) and iterative purposive searching. Our initial searches sought any studies that contained information on smoking in vehicles carrying children. Throughout the review we continued where needed to search for additional studies of any type that would conceptually contribute to helping build and/or test our framework. Results Our framework identified a series of transferable threats to public health legislation. When applied to smoking bans in vehicles, problem misidentification, public support, opposition, and enforcement issues were particularly prominent threats. Our framework enabled us to understand and explain the nature of each threat and to infer the most likely outcome if such legislation were to be proposed in a jurisdiction where no such ban existed. Specifically, the micro-environment of a vehicle can contain highly hazardous levels of second-hand smoke. Public support for such legislation is high amongst smokers and non-smokers, and their underlying motivations were very similar: wanting to practice the Millian principle of protecting children from harm. Evidence indicated that the tobacco industry was not likely to oppose legislation, and arguments that such a law would be 'unenforceable' were unfounded. Conclusion It is possible to develop a coherent and transferable evidence-based framework of the ideas and assumptions behind the threats to legislative intervention that may assist policy and decision makers to analyse and judge if legislation is feasible and/or likely to succeed. PMID:21477347

  7. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    PubMed Central

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on a keyword search of publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  8. Nested Dissection Interface Reconstruction in Pececillo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibben, Zechariah Joel

    A nested dissection method for interface reconstruction in a volume tracking framework has been implemented in Pececillo. This method provides a significant improvement over the traditional onion-skin method, which does not appropriately handle T-shaped multimaterial intersections and dynamic contact lines present in additive manufacturing simulations. The resulting implementation lays the groundwork for further research in numerical contact angle estimates.

  9. Flexible patient information search and retrieval framework: pilot implementation

    NASA Astrophysics Data System (ADS)

    Erdal, Selnur; Catalyurek, Umit V.; Saltz, Joel; Kamal, Jyoti; Gurcan, Metin N.

    2007-03-01

    Medical centers collect and store a significant amount of valuable data pertaining to patients' visits in the form of medical free text. In addition, standardized diagnosis codes (International Classification of Diseases, Ninth Revision, Clinical Modification: ICD9-CM) related to those dictated reports are usually available. In this work, we have created a framework where image searches can be initiated through a combination of free-text reports and ICD9 codes. This framework enables more comprehensive searches on existing large sets of patient data in a systematic way. The free-text search is enriched by computer-aided inclusion of additional search terms enhanced by a thesaurus. This combination of enriched search allows users to access a larger set of relevant results from a patient-centric PACS in a simpler way. Therefore, such a framework is of particular use in tasks such as gathering images for desired patient populations, building disease models, and so on. As the motivating application of our framework, we implemented a search engine. This search engine processed two years of patient data from the OSU Medical Center's Information Warehouse and identified lung nodule location information using a combination of UMLS Metathesaurus-enhanced text report searches along with ICD9 code searches on patients that had been discharged. Five different queries with various ICD9 codes involving lung cancer were carried out on 172,552 cases. Each search was completed in under a minute on average per ICD9 code, and the inclusion of the UMLS thesaurus increased the number of relevant cases by 45% on average.
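
    The combined query style described here, restrict by ICD9 code, then match thesaurus-expanded terms against the free-text report, is easy to demonstrate. In the sketch below, an in-memory SQLite table and a two-entry synonym list stand in for the information warehouse and the UMLS Metathesaurus.

```python
# Combined structured (ICD9) + free-text retrieval over toy report data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (patient_id, icd9, report_text)")
conn.executemany("INSERT INTO reports VALUES (?, ?, ?)", [
    (1, "162.9", "CT chest: 8 mm nodule in the right upper lobe."),
    (2, "162.9", "No suspicious pulmonary findings."),
    (3, "486",   "Right lower lobe consolidation consistent with pneumonia."),
])

synonyms = ["nodule", "mass"]            # thesaurus-expanded search terms
like_clauses = " OR ".join("report_text LIKE ?" for _ in synonyms)
sql = f"SELECT patient_id FROM reports WHERE icd9 = ? AND ({like_clauses})"
params = ["162.9"] + [f"%{s}%" for s in synonyms]

print([row[0] for row in conn.execute(sql, params)])   # -> [1]
```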

  10. Optimal Search for an Astrophysical Gravitational-Wave Background

    NASA Astrophysics Data System (ADS)

    Smith, Rory; Thrane, Eric

    2018-04-01

    Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.

  11. Garbage in, Garbage Out: Data Collection, Quality Assessment and Reporting Standards for Social Media Data Use in Health Research, Infodemiology and Digital Disease Detection

    PubMed Central

    Huang, Jidong; Emery, Sherry

    2016-01-01

    Background Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. Objective We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. Methods We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of the retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss the two conditions that estimation of retrieval precision and recall relies on (accurate human coding and full data collection) and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework to a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. Results We developed and applied a search filter to retrieve e-cigarette–related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette–related tweets from the archive and was validated. Retrieval precision was above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. Conclusions This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, the search filter building process, and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms. PMID:26920122
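
    The two criteria reduce to simple set arithmetic under the ideal conditions named above. A minimal Python sketch, assuming error-free human coding and full data collection; the tweet IDs are illustrative:

      def retrieval_precision(retrieved, relevant):
          """Fraction of retrieved items that are relevant."""
          return len(retrieved & relevant) / len(retrieved)

      def retrieval_recall(retrieved, relevant):
          """Fraction of relevant items that were retrieved."""
          return len(retrieved & relevant) / len(relevant)

      retrieved = {"t1", "t2", "t3", "t4"}  # tweets matched by the filter
      relevant = {"t1", "t2", "t3", "t5"}   # tweets coders judged on-topic
      print(retrieval_precision(retrieved, relevant))  # 0.75
      print(retrieval_recall(retrieved, relevant))     # 0.75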

  12. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines

    PubMed Central

    Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.

    2010-01-01

    Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy and tend to be too large for manual inspection; therefore, probabilistic methods must be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search-engine-independent scoring framework based on FDR, called the FDRScore, which allows peptide identifications from different search engines to be combined. We observe that peptide identifications made by three search engines are infrequently false positives, whereas identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score, called the combined FDRScore, based on the FDR within peptide identifications grouped according to the set of search engines that made the identification. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
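
    The grouping idea behind the combined FDRScore can be sketched with the standard decoy-counting FDR estimate (#decoys / #targets) applied within each engine-set group. The data layout and function names below are illustrative assumptions, not the authors' API:

      from collections import defaultdict

      def group_fdr(psms):
          """psms: dicts with 'engines' (set) and 'is_decoy' (bool)."""
          groups = defaultdict(lambda: {"target": 0, "decoy": 0})
          for p in psms:
              key = frozenset(p["engines"])
              groups[key]["decoy" if p["is_decoy"] else "target"] += 1
          return {k: v["decoy"] / max(v["target"], 1)
                  for k, v in groups.items()}

      psms = [
          {"engines": {"Mascot", "X!Tandem", "OMSSA"}, "is_decoy": False},
          {"engines": {"Mascot"}, "is_decoy": True},
          {"engines": {"Mascot"}, "is_decoy": False},
      ]
      print(group_fdr(psms))  # single-engine group shows a far higher FDR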

  13. A method of searching for related literature on protein structure analysis by considering a user's intention

    PubMed Central

    2015-01-01

    Background In recent years, with advances in techniques for protein structure analysis, knowledge about protein structure and function has been published in a vast number of articles. A method to search for specific publications in such a large pool of articles is needed. In this paper, we propose a method to search for related articles on protein structure analysis by using an article itself as a query. Results In the proposed method, each article is represented as a set of concepts. Then, by using similarities among concepts formulated from databases such as Gene Ontology, similarities between articles are evaluated. In this framework, the desired search results vary depending on the user's search intention, because a variety of information is included in a single article. Therefore, the proposed method takes as an input query not only one input article (the primary article) but also additional articles related to it, to determine the search intention of the user based on the relationship between the query articles. In other words, based on the concepts contained in the input article and the additional articles, we realize a relevant-literature search that considers user intention by varying the degree of attention given to each concept and modifying the concept hierarchy graph. Conclusions We performed an experiment to retrieve relevant papers from articles on protein structure analysis registered in the Protein Data Bank, using three query datasets. The experimental results yielded better accuracy than when user intention was not considered, confirming the effectiveness of the proposed method. PMID:25952498

  14. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed Central

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
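
    The two-step resolution described in the DESIGN section amounts to two successive lookups. A minimal Python sketch, with all identifiers invented for illustration:

      CALL_TO_SHELF = {"WF-600": "shelf:respiratory-disease"}  # classification
      SHELF_TO_HOST = {"shelf:respiratory-disease": "https://host3.example.org"}

      def resolve(call_number):
          shelf = CALL_TO_SHELF[call_number]  # location-independent identifier
          return SHELF_TO_HOST[shelf]         # location directory lookup

      print(resolve("WF-600"))
      # Moving a collection only requires updating SHELF_TO_HOST;
      # the call numbers assigned to information sources never change.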

  15. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.

    PubMed

    Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo

    2015-01-01

    Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, since both negative and positive errors lead to increased operating costs. This paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions on unseen data as well. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method for predicting energy consumption. In particular, the system produces a lower error than the existing state-of-the-art techniques used on the same dataset. More importantly, this case study shows that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that produce accurate forecasts on unseen data as well.

  16. Inferring Mechanisms of Compensation from E-MAP and SGA Data Using Local Search Algorithms for Max Cut

    NASA Astrophysics Data System (ADS)

    Leiserson, Mark D. M.; Tatar, Diana; Cowen, Lenore J.; Hescott, Benjamin J.

    A new method based on a mathematically natural local search framework for max cut is developed to uncover functionally coherent module and BPM motifs in high-throughput genetic interaction data. Unlike previous methods, which also consider physical protein-protein interaction data, our method utilizes genetic interaction data only; this becomes increasingly important as high-throughput genetic interaction data is becoming available in settings where less is known about physical interaction data. We compare modules and BPMs obtained to previous methods and across different datasets. Despite needing no physical interaction information, the BPMs produced by our method are competitive with previous methods. Biological findings include a suggested global role for the prefoldin complex and a SWR subcomplex in pathway buffering in the budding yeast interactome.
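
    For readers unfamiliar with the underlying primitive, the following is a textbook single-flip local search for max cut in Python, not the authors' implementation: vertices switch sides while doing so increases the weight of edges crossing the cut.

      def local_search_max_cut(n, edges):
          """edges: dict mapping (u, v) with u < v to a positive weight."""
          side = [0] * n
          improved = True
          while improved:
              improved = False
              for v in range(n):
                  # Flipping v cuts its same-side edges and uncuts the rest.
                  gain = sum(w if side[a] == side[b] else -w
                             for (a, b), w in edges.items() if v in (a, b))
                  if gain > 0:
                      side[v] ^= 1
                      improved = True
          return side

      print(local_search_max_cut(4, {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 2.0}))
      # [1, 0, 1, 0]: all three edges cross the cut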

  17. Inferring mechanisms of compensation from E-MAP and SGA data using local search algorithms for max cut.

    PubMed

    Leiserson, Mark D M; Tatar, Diana; Cowen, Lenore J; Hescott, Benjamin J

    2011-11-01

    A new method based on a mathematically natural local search framework for max cut is developed to uncover functionally coherent module and BPM motifs in high-throughput genetic interaction data. Unlike previous methods, which also consider physical protein-protein interaction data, our method utilizes genetic interaction data only; this becomes increasingly important as high-throughput genetic interaction data is becoming available in settings where less is known about physical interaction data. We compare modules and BPMs obtained to previous methods and across different datasets. Despite needing no physical interaction information, the BPMs produced by our method are competitive with previous methods. Biological findings include a suggested global role for the prefoldin complex and a SWR subcomplex in pathway buffering in the budding yeast interactome.

  18. Secure Genomic Computation through Site-Wise Encryption

    PubMed Central

    Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu

    2015-01-01

    Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients’ genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds. PMID:26306278
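
    The reason a deterministic per-site scheme supports server-side search is that equal plaintexts at the same site encrypt to equal tokens. A hedged HMAC-based sketch of that property; the paper's actual construction is more involved:

      import hashlib
      import hmac

      KEY = b"client-secret-key"  # held by the data owner, not the cloud

      def encrypt_site(position, allele):
          """Deterministic token: same (position, allele) -> same token."""
          msg = f"{position}:{allele}".encode()
          return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

      genome = {p: encrypt_site(p, a) for p, a in [(101, "A"), (102, "G")]}
      marker = encrypt_site(102, "G")  # encrypted disease-marker query

      # The server sees only tokens, yet can still answer the search.
      print(any(tok == marker for tok in genome.values()))  # True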

  19. Secure Genomic Computation through Site-Wise Encryption.

    PubMed

    Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu

    2015-01-01

    Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients' genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds.

  20. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

    In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT, and if the model type changes, all functions of a search technique must be reimplemented, even when the same search technique is applied, because the model types differ. Implementing the same algorithm over and over again requires too much time and effort. We propose a model-independent software framework for SBST, which can reduce such redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using the common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314
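
    The framework's central design idea, writing the search once against an abstract model interface, can be sketched as follows. Class and method names are assumptions for illustration, not the framework's API:

      import random
      from abc import ABC, abstractmethod

      class TestModel(ABC):
          @abstractmethod
          def random_test(self): ...
          @abstractmethod
          def mutate(self, test): ...
          @abstractmethod
          def fitness(self, test): ...  # higher = closer to the test goal

      def hill_climb(model, iterations=1000):
          """Model-agnostic search: unchanged when the model type changes."""
          best = model.random_test()
          for _ in range(iterations):
              cand = model.mutate(best)
              if model.fitness(cand) > model.fitness(best):
                  best = cand
          return best

      class BoundaryModel(TestModel):  # one concrete model type
          def random_test(self):
              return random.randint(0, 100)
          def mutate(self, t):
              return max(0, min(100, t + random.choice((-1, 1))))
          def fitness(self, t):
              return -abs(t - 42)  # seek the boundary value 42

      print(hill_climb(BoundaryModel()))  # converges toward 42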

  1. Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy

    2000-01-01

    Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.

  2. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Mohan, Rakesh (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor); Kim, Michelle Yoonk Yung (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content, together with a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics, or metadata, and facilitates content-based search, indexing, and retrieval, among other capabilities, for both streamed and aggregated multimedia objects.

  3. A framework for intelligent data acquisition and real-time database searching for shotgun proteomics.

    PubMed

    Graumann, Johannes; Scheltema, Richard A; Zhang, Yong; Cox, Jürgen; Mann, Matthias

    2012-03-01

    In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides "on-the-fly" within 30 ms, well within the time constraints of a shotgun fragmentation "topN" method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available.
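
    The acquisition decision the agent makes can be caricatured as a filtered topN selection. A hedged sketch, with all names and thresholds invented; MaxQuant Real-Time's actual logic also builds isotope patterns and SILAC pair information:

      from heapq import nlargest

      def select_topn(precursors, identified, target_masses, n=10, tol=0.01):
          """precursors: (m/z, intensity) pairs from the survey scan.
          Skip already-identified masses, prefer a target list, then
          take the most intense remaining precursors."""
          fresh = [(mz, it) for mz, it in precursors
                   if round(mz, 2) not in identified]
          on_target = [p for p in fresh
                       if any(abs(p[0] - m) < tol for m in target_masses)]
          rest = [p for p in fresh if p not in on_target]
          k = n - min(n, len(on_target))
          return (on_target[:n] + nlargest(k, rest, key=lambda p: p[1]))[:n]

      precursors = [(500.31, 9e5), (622.84, 5e5), (703.12, 7e5)]
      print(select_topn(precursors, identified={500.31},
                        target_masses=[622.84]))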

  4. A Framework for Intelligent Data Acquisition and Real-Time Database Searching for Shotgun Proteomics*

    PubMed Central

    Graumann, Johannes; Scheltema, Richard A.; Zhang, Yong; Cox, Jürgen; Mann, Matthias

    2012-01-01

    In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides “on-the-fly” within 30 ms, well within the time constraints of a shotgun fragmentation “topN” method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available. PMID:22171319

  5. Garbage in, Garbage Out: Data Collection, Quality Assessment and Reporting Standards for Social Media Data Use in Health Research, Infodemiology and Digital Disease Detection.

    PubMed

    Kim, Yoonsang; Huang, Jidong; Emery, Sherry

    2016-02-26

    Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of the retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss the two conditions that estimation of retrieval precision and recall relies on (accurate human coding and full data collection) and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework to a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. We developed and applied a search filter to retrieve e-cigarette-related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette-related tweets from the archive and was validated. Retrieval precision was above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, the search filter building process, and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms.

  6. A new method to improve network topological similarity search: applied to fold recognition

    PubMed Central

    Lhota, John; Hauptman, Ruth; Hart, Thomas; Ng, Clara; Xie, Lei

    2015-01-01

    Motivation: Similarity search is the foundation of bioinformatics. It plays a key role in establishing structural, functional and evolutionary relationships between biological sequences. Although the power of the similarity search has increased steadily in recent years, a high percentage of sequences remain uncharacterized in the protein universe. Thus, new similarity search strategies are needed to efficiently and reliably infer the structure and function of new sequences. The existing paradigm for studying protein sequence, structure, function and evolution has been established on the assumption that the protein universe is discrete and hierarchical. Cumulative evidence suggests that the protein universe is continuous. As a result, conventional sequence homology search methods may not be able to detect novel structural, functional and evolutionary relationships between proteins from weak and noisy sequence signals. To overcome the limitations of existing similarity search methods, we propose a new algorithmic framework, Enrichment of Network Topological Similarity (ENTS), to improve the performance of large-scale similarity searches in bioinformatics. Results: We apply ENTS to a challenging unsolved problem: protein fold recognition. Our rigorous benchmark studies demonstrate that ENTS considerably outperforms state-of-the-art methods. As the concept of ENTS can be applied to any similarity metric, it may provide a general framework for similarity search on any set of biological entities, given their representation as a network. Availability and implementation: Source code freely available upon request. Contact: lxie@iscb.org PMID:25717198

  7. Building clinical networks: a developmental evaluation framework.

    PubMed

    Carswell, Peter; Manning, Benjamin; Long, Janet; Braithwaite, Jeffrey

    2014-05-01

    Clinical networks have been designed as a cross-organisational mechanism to plan and deliver health services. With recent concerns about the effectiveness of these structures, it is timely to consider an evidence-informed approach for how they can be developed and evaluated. To document an evaluation framework for clinical networks by drawing on the network evaluation literature and a 5-year study of clinical networks. We searched literature in three domains: network evaluation, factors that aid or inhibit network development, and on robust methods to measure network characteristics. This material was used to build a framework required for effective developmental evaluation. The framework's architecture identifies three stages of clinical network development; partner selection, network design and network management. Within each stage is evidence about factors that act as facilitators and barriers to network growth. These factors can be used to measure progress via appropriate methods and tools. The framework can provide for network growth and support informed decisions about progress. For the first time in one place a framework incorporating rigorous methods and tools can identify factors known to affect the development of clinical networks. The target user group is internal stakeholders who need to conduct developmental evaluation to inform key decisions along their network's developmental pathway.

  8. An overview of ethical frameworks in public health: can they be supportive in the evaluation of programs to prevent overweight?

    PubMed Central

    2010-01-01

    Background The prevention of overweight sometimes raises complex ethical questions. Ethical public health frameworks may be helpful in evaluating programs or policy for overweight prevention. We give an overview of the purpose, form, and contents of such public health frameworks and investigate the extent to which they are useful for evaluating programs to prevent overweight and/or obesity. Methods Our search for frameworks consisted of three steps. Firstly, we asked experts in the field of ethics and public health for the frameworks they were aware of. Secondly, we performed a search in Pubmed. Thirdly, we checked literature references in the articles on frameworks we found. In total, we thus found six ethical frameworks. We assessed the area on which the available ethical frameworks focus, the users they target, the type of policy or intervention they propose to address, and their aim. Further, we looked at their structure and content, that is, tools for guiding the analytic process, the main ethical principles or values, possible criteria for dealing with ethical conflicts, and the concrete policy issues they are applied to. Results All frameworks aim to support public health professionals or policymakers. Most of them provide a set of values or principles that serve as a standard for evaluating policy. Most frameworks articulate both the positive ethical foundations for public health and ethical constraints or concerns. Some frameworks offer analytic tools for guiding the evaluative process. Procedural guidelines and concrete criteria for solving important ethical conflicts in the particular area of the prevention of overweight or obesity are mostly lacking. Conclusions Public health ethical frameworks may be supportive in the evaluation of overweight prevention programs or policy, but seem to lack practical guidance to address ethical conflicts in this particular area. PMID:20969761

  9. Localization Versus Abstraction: A Comparison of Two Search Reduction Techniques

    NASA Technical Reports Server (NTRS)

    Lansky, Amy L.

    1992-01-01

    There has been much recent work on the use of abstraction to improve planning behavior and cost. Another technique for dealing with the inherently explosive cost of planning is localization. This paper compares the relative strengths of localization and abstraction in reducing planning search cost. In particular, localization is shown to subsume abstraction. Localization techniques can model the various methods of abstraction that have been used, but also provide a much more flexible framework, with a broader range of benefits.

  10. Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.

    PubMed

    Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue

    2016-09-08

    By transforming data into binary representations, i.e., hashing, we can perform high-speed search with low storage cost, and thus hashing has attracted increasing research interest in recent years. How to generate hash codes for multimodal data (e.g., images with textual tags, documents with photos, etc.) for large-scale cross-modality search (e.g., searching a database for images semantically related to a document query) is an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified hash codes for the different modalities of one multimodal instance in a shared latent semantic space in which the different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. We extend this general framework to an unsupervised scenario, in which it preserves the Euclidean structure of the data, and a supervised scenario, in which it fully exploits the label information of the data. The corresponding theoretical analysis and optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality hashing methods, which validates the effectiveness of the proposed CMFH.
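
    The key step, thresholding a shared latent factor so that every modality of an instance yields one code, can be sketched with synthetic data. In this illustration the shared factor is recovered by plain least squares; the paper's optimization is joint and regularized, so this is a schematic only:

      import numpy as np

      rng = np.random.default_rng(0)
      n, d_img, d_txt, k = 6, 8, 5, 4
      V = rng.normal(size=(n, k))               # shared latent semantics
      X_img = V @ rng.normal(size=(k, d_img))   # image-modality features
      X_txt = V @ rng.normal(size=(k, d_txt))   # text-modality features

      # Project either modality onto the shared space, then binarize:
      # one unified hash code per instance, regardless of modality.
      W_img = np.linalg.lstsq(X_img, V, rcond=None)[0]
      W_txt = np.linalg.lstsq(X_txt, V, rcond=None)[0]
      codes = np.sign(X_img @ W_img)            # database codes (images)

      query = np.sign(X_txt[0] @ W_txt)         # text query for instance 0
      hamming = np.sum(codes != query, axis=1)  # cross-modality matching
      print(hamming.argmin())                   # 0: retrieves its own image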

  11. Inferring Mechanisms of Compensation from E-MAP and SGA Data Using Local Search Algorithms for Max Cut

    PubMed Central

    Leiserson, Mark D.M.; Tatar, Diana; Cowen, Lenore J.

    2011-01-01

    Abstract A new method based on a mathematically natural local search framework for max cut is developed to uncover functionally coherent module and BPM motifs in high-throughput genetic interaction data. Unlike previous methods, which also consider physical protein-protein interaction data, our method utilizes genetic interaction data only; this becomes increasingly important as high-throughput genetic interaction data is becoming available in settings where less is known about physical interaction data. We compare modules and BPMs obtained to previous methods and across different datasets. Despite needing no physical interaction information, the BPMs produced by our method are competitive with previous methods. Biological findings include a suggested global role for the prefoldin complex and a SWR subcomplex in pathway buffering in the budding yeast interactome. PMID:21882903

  12. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators

    DOE PAGES

    Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; ...

    2015-01-09

    We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: m_DM, M_med, g_DM and g_q, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.

  13. Student Leadership Development: A Functional Framework

    ERIC Educational Resources Information Center

    Hine, Gregory Stephen Colin

    2014-01-01

    This article presents a longitudinal, qualitative case study of a student leadership program in a Catholic secondary school in Perth, Western Australia. Data were collected over a period of three years through multiple methods, including one-on-one interviewing, focus group interviewing, document searches, field notes, and researcher reflective…

  14. A pluggable framework for parallel pairwise sequence search.

    PubMed

    Archuleta, Jeremy; Feng, Wu-chun; Tilevich, Eli

    2007-01-01

    The current and near future of the computing industry is one of multi-core and multi-processor technology. Most existing sequence-search tools have been designed with a focus on single-core, single-processor systems. This discrepancy between software design and hardware architecture substantially hinders sequence-search performance by not allowing full utilization of the hardware. This paper presents a novel framework that will aid the conversion of serial sequence-search tools into a parallel version that can take full advantage of the available hardware. The framework, which is based on a software architecture called mixin layers with refined roles, enables modules to be plugged into the framework with minimal effort. The inherent modular design improves maintenance and extensibility, thus opening up a plethora of opportunities for advanced algorithmic features to be developed and incorporated while routine maintenance of the codebase persists.

  15. A flexible motif search technique based on generalized profiles.

    PubMed

    Bucher, P; Karplus, K; Moeri, N; Hofmann, K

    1996-03-01

    A flexible motif search technique is presented which has two major components: (1) a generalized profile syntax serving as a motif definition language; and (2) a motif search method specifically adapted to the problem of finding multiple instances of a motif in the same sequence. The new profile structure, which is the core of the generalized profile syntax, combines the functions of a variety of motif descriptors implemented in other methods, including regular expression-like patterns, weight matrices, previously used profiles, and certain types of hidden Markov models (HMMs). The relationship between generalized profiles and other biomolecular motif descriptors is analyzed in detail, with special attention to HMMs. Generalized profiles are shown to be equivalent to a particular class of HMMs, and conversion procedures in both directions are given. The conversion procedures provide an interpretation for local alignment in the framework of stochastic models, allowing for clear, simple significance tests. A mathematical statement of the motif search problem defines the new method exactly without linking it to a specific algorithmic solution. Part of the definition includes a new definition of disjointness of alignments.

  16. Improve Biomedical Information Retrieval using Modified Learning to Rank Methods.

    PubMed

    Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao

    2016-06-14

    In recent years, the number of biomedical articles has increased exponentially, making it infeasible for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can deal with the problem automatically, providing users with the needed information. However, it is a great challenge to apply these technologies directly to biomedical retrieval because of the abundance of domain-specific terminologies. To enhance biomedical retrieval, we propose a novel framework based on learning to rank. Learning to rank is a family of state-of-the-art information retrieval techniques that has proved effective in many information retrieval tasks. In the proposed framework, we attempt to tackle the problem of abundant terminologies by constructing ranking models, which focus not only on retrieving the most relevant documents but also on diversifying the search results to increase the completeness of the result list for a given query. For model training, we propose two novel document-labeling strategies and combine several traditional retrieval models as learning features. We also investigate the usefulness of different learning to rank approaches in our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.
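
    As background, the pairwise flavor of learning to rank can be shown in a few lines: several retrieval scores act as features, and weights are learned so that relevant documents outrank non-relevant ones for the same query. This generic ranking-perceptron sketch is not the authors' model:

      def rank_score(weights, features):
          return sum(w * f for w, f in zip(weights, features))

      def perceptron_ltr(pairs, n_features, epochs=50, lr=0.1):
          """pairs: (relevant_features, nonrelevant_features) per query."""
          w = [0.0] * n_features
          for _ in range(epochs):
              for rel, non in pairs:
                  if rank_score(w, rel) <= rank_score(w, non):  # misordered
                      w = [wi + lr * (r - n)
                           for wi, r, n in zip(w, rel, non)]
          return w

      # features per document: e.g., (BM25 score, language-model score)
      pairs = [((2.0, 1.0), (1.0, 1.5)), ((1.8, 0.9), (0.7, 1.2))]
      print(perceptron_ltr(pairs, 2))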

  17. Dementia skills for all: a core competency framework for the workforce in the United Kingdom.

    PubMed

    Tsaroucha, Anna; Benbow, Susan Mary; Kingston, Paul; Le Mesurier, Nick

    2013-01-01

    One of the biggest challenges facing health and social care in the United Kingdom is the projected increase in the number of older people who require dementia care. The National Dementia Strategy (Department of Health, 2009) emphasizes the critical need for a skilled workforce in all aspects of dementia care. In the West Midlands, the Strategic Health Authority commissioned a project to develop a set of generic core competencies that would guide a competency based curriculum to meet the demands for improved dementia training and education. A systematic literature search was conducted to identify relevant frameworks to assist with this work. The core competency framework produced and the methods used for the development of the framework are presented and discussed.

  18. An algebra-based method for inferring gene regulatory networks.

    PubMed

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to be able to incorporate prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to understand how a given inference method is affected by experimental and other noise in the data used. This paper presents a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS) that meets all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html.
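
    The model class being searched can be made concrete with a toy network. A minimal Python sketch (the paper's implementation is in C++) of one synchronous update of a three-gene Boolean network, with the equivalent GF(2) polynomials noted in comments; the network itself is invented for illustration:

      def step(state):
          """One synchronous update of a toy 3-gene Boolean network."""
          a, b, c = state
          return (
              b and not c,  # f_a = b*(1 + c) over GF(2)
              a,            # f_b = a
              a or b,       # f_c = a + b + a*b over GF(2)
          )

      state = (True, False, False)
      for t in range(4):
          print(t, state)
          state = step(state)  # iterate to observe the dynamics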

  19. Use and misuse of mixed methods in population oral health research: A scoping review.

    PubMed

    Gupta, A; Keuskamp, D

    2018-05-30

    Despite the known benefits of a mixed methods approach in health research, little is known of its use in the field of population oral health. To map the extent of literature using a mixed methods approach to examine population oral health outcomes. For a comprehensive search of all the available literature published in the English language, databases including PubMed, Dentistry and Oral Sciences Source (DOSS), CINAHL, Web of Science and EMBASE (including Medline) were searched using a range of keywords from inception to October 2017. Only peer-reviewed, population-based studies of oral health outcomes conducted among non-institutionalised participants and using mixed methods were considered eligible for inclusion. Only nine studies met the inclusion criteria and were included in the review. The most frequent oral health outcome investigated was caries experience. However, most studies lacked a theoretical rationale or framework for using mixed methods, or supporting the use of qualitative data. Concurrent triangulation with a convergent design was the most commonly used mixed methods typology for integrating quantitative and qualitative data. The tools used to collect quantitative and qualitative data were mostly limited to surveys and interviews. With growing complexity recognised in the determinants of oral disease, future studies addressing population oral health outcomes are likely to benefit from the use of mixed methods. Explicit consideration of theoretical framework and methodology will strengthen those investigations.

  20. Examination of the regulatory frameworks applicable to biologic drugs (including stem cells and their progeny) in Europe, the U.S., and Australia: part I--a method of manual documentary analysis.

    PubMed

    Ilic, Nina; Savic, Snezana; Siegel, Evan; Atkinson, Kerry; Tasic, Ljiljana

    2012-12-01

    Recent development of a wide range of regulatory standards applicable to production and use of tissues, cells, and other biologics (or biologicals), as advanced therapies, indicates considerable interest in the regulation of these products. The objective of this study was to analyze and compare high-tier documents within the Australian, European, and U.S. biologic drug regulatory environments using qualitative methodology. Cohort 1 of the selected 18 high-tier regulatory documents from the European Medicines Agency (EMA), the U.S. Food and Drug Administration (FDA), and the Therapeutic Goods Administration (TGA) regulatory frameworks was subjected to a manual documentary analysis. These documents were consistent with the legal requirements for manufacturing and use of biologic drugs in humans and fall into six different categories. Manual analysis included a terminology search. The occurrence, frequency, and interchangeable use of different terms and phrases were recorded in the manual documentary analysis. Despite obvious differences, manual documentary analysis revealed certain consistency in the use of terminology across the analyzed frameworks. Phrase search frequencies showed less uniformity than term searches. Overall, the EMA framework's documents referred to "medicinal products" and "marketing authorization(s)," the FDA documents discussed "drug(s)" or "biologic(s)," and the TGA documents referred to "biological(s)." Although high-tier documents often use different terminology, they share concepts and themes. Documents originating from the same source have more conjunction in their terminology although they belong to different frameworks (i.e., Good Clinical Practice requirements based on the Declaration of Helsinki, 1964). Automated (software-based) documentary analysis should be undertaken for the conceptual and relational analysis.

  1. Examination of the Regulatory Frameworks Applicable to Biologic Drugs (Including Stem Cells and Their Progeny) in Europe, the U.S., and Australia: Part I—A Method of Manual Documentary Analysis

    PubMed Central

    Savic, Snezana; Siegel, Evan; Atkinson, Kerry; Tasic, Ljiljana

    2012-01-01

    Recent development of a wide range of regulatory standards applicable to production and use of tissues, cells, and other biologics (or biologicals), as advanced therapies, indicates considerable interest in the regulation of these products. The objective of this study was to analyze and compare high-tier documents within the Australian, European, and U.S. biologic drug regulatory environments using qualitative methodology. Cohort 1 of the selected 18 high-tier regulatory documents from the European Medicines Agency (EMA), the U.S. Food and Drug Administration (FDA), and the Therapeutic Goods Administration (TGA) regulatory frameworks was subjected to a manual documentary analysis. These documents were consistent with the legal requirements for manufacturing and use of biologic drugs in humans and fall into six different categories. Manual analysis included a terminology search. The occurrence, frequency, and interchangeable use of different terms and phrases were recorded in the manual documentary analysis. Despite obvious differences, manual documentary analysis revealed certain consistency in the use of terminology across the analyzed frameworks. Phrase search frequencies showed less uniformity than term searches. Overall, the EMA framework's documents referred to “medicinal products” and “marketing authorization(s),” the FDA documents discussed “drug(s)” or “biologic(s),” and the TGA documents referred to “biological(s).” Although high-tier documents often use different terminology, they share concepts and themes. Documents originating from the same source have more conjunction in their terminology although they belong to different frameworks (i.e., Good Clinical Practice requirements based on the Declaration of Helsinki, 1964). Automated (software-based) documentary analysis should be undertaken for the conceptual and relational analysis. PMID:23283551

  2. Hybridization of decomposition and local search for multiobjective optimization.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2014-10-01

    Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single-objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single-objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single-objective local search is applied to each perturbed solution in P(L) to improve P(L) and P(E), and to reinitialize P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem-specific knowledge, well-developed single-objective local search and heuristics, and Pareto local search methods can be hybridized. It is a population-based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best heuristics reported so far on these two problems.
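
    The decomposition step that MOMAD starts from is a weighted-sum scalarization. A minimal sketch for a biobjective problem; the three populations and the Pareto local search are omitted, and the weights and objective vector are toy values:

      def aggregate(objectives, weights):
          """Weighted-sum scalarization of one solution."""
          return sum(w * f for w, f in zip(weights, objectives))

      def decompose(n_subproblems):
          """Evenly spread weight vectors for two objectives."""
          return [(i / (n_subproblems - 1), 1 - i / (n_subproblems - 1))
                  for i in range(n_subproblems)]

      f = (3.0, 7.0)                 # a solution's objective vector
      for w in decompose(5):
          print(w, aggregate(f, w))  # its score on each subproblem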

  3. Sensitivity and Predictive Value of 15 PubMed Search Strategies to Answer Clinical Questions Rated Against Full Systematic Reviews

    PubMed Central

    Merglen, Arnaud; Courvoisier, Delphine S; Combescure, Christophe; Garin, Nicolas; Perrier, Arnaud; Perneger, Thomas V

    2012-01-01

    Background Clinicians perform searches in PubMed daily, but retrieving relevant studies is challenging due to the rapid expansion of medical knowledge. Little is known about the performance of search strategies when they are applied to answer specific clinical questions. Objective To compare the performance of 15 PubMed search strategies in retrieving relevant clinical trials on therapeutic interventions. Methods We used Cochrane systematic reviews to identify relevant trials for 30 clinical questions. Search terms were extracted from the abstract using a predefined procedure based on the population, interventions, comparison, outcomes (PICO) framework and combined into queries. We tested 15 search strategies that varied in their query (PIC or PICO), use of PubMed’s Clinical Queries therapeutic filters (broad or narrow), search limits, and PubMed links to related articles. We assessed sensitivity (recall) and positive predictive value (precision) of each strategy on the first 2 PubMed pages (40 articles) and on the complete search output. Results The performance of the search strategies varied widely according to the clinical question. Unfiltered searches and those using the broad filter of Clinical Queries produced large outputs and retrieved few relevant articles within the first 2 pages, resulting in a median sensitivity of only 10%–25%. In contrast, all searches using the narrow filter performed significantly better, with a median sensitivity of about 50% (all P < .001 compared with unfiltered queries) and positive predictive values of 20%–30% (P < .001 compared with unfiltered queries). This benefit was consistent for most clinical questions. Searches based on related articles retrieved about a third of the relevant studies. Conclusions The Clinical Queries narrow filter, along with well-formulated queries based on the PICO framework, provided the greatest aid in retrieving relevant clinical trials within the 2 first PubMed pages. These results can help clinicians apply effective strategies to answer their questions at the point of care. PMID:22693047
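
    A PICO-based query combined with the narrow therapy filter can be assembled mechanically. The sketch below uses one published form of the PubMed narrow therapy hedge; the PICO terms and the helper function are illustrative assumptions:

      NARROW_THERAPY = (
          "(randomized controlled trial[Publication Type] OR "
          "(randomized[Title/Abstract] AND controlled[Title/Abstract] "
          "AND trial[Title/Abstract]))"
      )

      def pico_query(population, intervention, comparison=None, outcome=None):
          parts = [population, intervention, comparison, outcome]
          core = " AND ".join(p for p in parts if p)
          return f"({core}) AND {NARROW_THERAPY}"

      print(pico_query("type 2 diabetes", "metformin", "sulfonylurea"))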

  4. The art of successful implementation of psychosocial interventions in residential dementia care: a systematic review of the literature based on the RE-AIM framework.

    PubMed

    Boersma, Petra; van Weert, Julia C M; Lakerveld, Jeroen; Dröes, Rose-Marie

    2015-01-01

    In past decades, many psychosocial interventions for elderly people with dementia have been developed and implemented. Relatively little research has been done on the extent to which these interventions were implemented in daily care. The aim of this study was to obtain insight into strategies for successful implementation of psychosocial interventions in daily residential dementia care. Using a modified RE-AIM framework, the indicators considered important for effective and sustainable implementation were defined. A systematic literature search was undertaken in PubMed, PsycINFO, and Cinahl, followed by a hand search for key papers. The included publications were mapped based on the dimensions of the RE-AIM framework: Reach, Effectiveness, Adoption, Implementation, and Maintenance. Fifty-four papers met the inclusion criteria and described various psychosocial interventions. A distinction was made between studies that used one implementation strategy and studies that used multiple strategies. This review shows that improving caregivers' knowledge requires multiple implementation strategies; education alone is not enough. To foster a more person-centered attitude, different types of knowledge transfer can be effective. Little consideration is given to the adoption of the method by caregivers or to long-term sustainability (maintenance). This review shows that successful implementation of a psychosocial method requires the use of multiple implementation strategies. To ensure sustainability of a psychosocial care method in daily nursing home care, innovators as well as researchers should pay specific attention to the Adoption, Implementation, and Maintenance dimensions of the RE-AIM implementation framework.

  5. eTACTS: A Method for Dynamically Filtering Clinical Trial Search Results

    PubMed Central

    Miotto, Riccardo; Jiang, Silis; Weng, Chunhua

    2013-01-01

    Objective Information overload is a significant problem facing online clinical trial searchers. We present eTACTS, a novel interactive retrieval framework using common eligibility tags to dynamically filter clinical trial search results. Materials and Methods eTACTS mines frequent eligibility tags from free-text clinical trial eligibility criteria and uses these tags for trial indexing. After an initial search, eTACTS presents to the user a tag cloud representing the current results. When the user selects a tag, eTACTS retains only those trials containing that tag in their eligibility criteria and generates a new cloud based on tag frequency and co-occurrences in the remaining trials. The user can then select a new tag or unselect a previous tag. The process iterates until a manageable number of trials is returned. We evaluated eTACTS in terms of filtering efficiency, diversity of the search results, and user eligibility to the filtered trials using both qualitative and quantitative methods. Results eTACTS (1) rapidly reduced search results from over a thousand trials to ten; (2) highlighted trials that are generally not top-ranked by conventional search engines; and (3) retrieved a greater number of suitable trials than existing search engines. Discussion eTACTS enables intuitive clinical trial searches by indexing eligibility criteria with effective tags. User evaluation was limited to one case study and a small group of evaluators due to the long duration of the experiment. Although a larger-scale evaluation could be conducted, this feasibility study demonstrated significant advantages of eTACTS over existing clinical trial search engines. Conclusion A dynamic eligibility tag cloud can potentially enhance state-of-the-art clinical trial search engines by allowing intuitive and efficient filtering of the search result space. PMID:23916863
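    The iterative filter-and-recount loop that eTACTS describes can be sketched in a few lines. The trial records and tags below are hypothetical, and this illustrates only the core idea, not the eTACTS implementation:

```python
from collections import Counter

def build_tag_cloud(trials, selected):
    """Keep trials whose eligibility tags contain all selected tags,
    then count the remaining tags to form the next cloud."""
    remaining = [t for t in trials if selected <= t["tags"]]
    cloud = Counter(tag for t in remaining for tag in t["tags"] - selected)
    return remaining, cloud

# Hypothetical trials with eligibility tags mined from free-text criteria.
trials = [
    {"id": "NCT01", "tags": {"diabetes", "adult", "nonsmoker"}},
    {"id": "NCT02", "tags": {"diabetes", "adult"}},
    {"id": "NCT03", "tags": {"hypertension", "adult"}},
]
remaining, cloud = build_tag_cloud(trials, selected={"diabetes"})
print([t["id"] for t in remaining])  # ['NCT01', 'NCT02']
print(cloud.most_common())           # [('adult', 2), ('nonsmoker', 1)]
```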

  6. Adolescent Risk Screening Instruments for Primary Care: An Integrative Review Utilizing the Donabedian Framework.

    PubMed

    Hiott, Deanna B; Phillips, Shannon; Amella, Elaine

    2017-07-31

    Adolescent risk-taking behavior choices can affect future health outcomes. The purpose of this integrative literature review is to evaluate adolescent risk screening instruments available to primary care providers in the United States using the Donabedian Framework of structure, process, and outcome. To examine the literature concerning multidimensional adolescent risk screening instruments available in the United States for use in the primary care setting, library searches, ancestry searches, and Internet searches were conducted. Library searches included a systematic search of the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Academic Search Premier, Health Source Nursing Academic Ed, Medline, PsycINFO, the Psychology and Behavioral Sciences Collection, and PubMed databases with CINAHL headings using the following Boolean search terms: "primary care" and screening and pediatric. Criteria for inclusion consisted of studies conducted in the United States that involved broad multidimensional adolescent risk screening instruments for use in the pediatric primary care setting. Instruments that focused solely on one unhealthy behavior were excluded, as were developmental screens and screens not validated or designed for all ages of adolescents. In all, 25 manuscripts were reviewed; 16 screens met the inclusion criteria and were included in the study. These 16 screens were examined for factors associated with the Donabedian structure-process-outcome model. This review revealed that many screens contain structural issues related to cost and length that inhibit provider implementation in the primary care setting. Process limitations regarding the report method and administration format were also identified. The Pediatric Symptom Checklist was identified as a free, short tool that is valid and reliable.

  7. The neurosciences and the search for a unified psychology: the science and esthetics of a single framework

    PubMed Central

    Stam, Henderikus J.

    2015-01-01

    The search for a so-called unified or integrated theory has long served as a goal for some psychologists, even if the search is often implicit. But if the established sciences do not have an explicitly unified set of theories, then why should psychology? After examining this question again I argue that psychology is in fact reasonably unified around its methods and its commitment to functional explanations, an indeterminate functionalism. The question of the place of the neurosciences in this framework is complex. On the one hand, the neuroscientific project will not likely renew and synthesize the disparate arms of psychology. On the other hand, their reformulation of what it means to be human will exert an influence in multiple ways. One way to capture that influence is to conceptualize the brain in terms of a technology that we interact with in a manner that we do not yet fully understand. In this way we maintain both a distance from neuro-reductionism and refrain from committing to an unfettered subjectivity. PMID:26500571

  8. A framework and methodology for navigating disaster and global health in crisis literature.

    PubMed

    Chan, Jennifer L; Burkle, Frederick M

    2013-04-04

    Both 'disasters' and 'global health in crisis' research have grown dramatically due to the ever-increasing frequency and magnitude of crises around the world. Large volumes of peer-reviewed literature are not only a testament to the field's value and evolution, but also present an unprecedented outpouring of seemingly unmanageable information across a wide array of crises and disciplines. Disaster medicine, health and humanitarian assistance, global health and public health disaster literature all lie within the disaster and global health in crisis literature spectrum and are increasingly accepted as multidisciplinary and transdisciplinary fields. Researchers, policy makers, and practitioners now face a new challenge: that of accessing this expansive literature for decision-making and exploring new areas of research. Individuals are also reaching beyond the peer-reviewed environment to grey literature, using search engines like Google Scholar to access policy documents, consensus reports and conference proceedings. What is needed is a method and mechanism with which to search and retrieve relevant articles from this expansive body of literature. This manuscript presents both a framework and a workable process for a diverse group of users to navigate the growing peer-reviewed and grey disaster and global health in crisis literature. Disaster terms from textbooks, peer-reviewed and grey literature were used to design a framework of thematic clusters and subject matter 'nodes'. A set of 84 terms, selected from 143 curated terms, was organized within each node, reflecting topics within the disaster and global health in crisis literature. Terms were crossed with one another and with the term 'disaster'. The results were formatted into tables and matrices. This process created a roadmap of search terms that could be applied to the PubMed database. Each search in the matrix or table results in a listed number of articles. This process was applied to literature from PubMed from 2005-2011. A complementary process was also applied to Google Scholar using the same framework of clusters, nodes, and terms, expanding the search process to include the broader grey literature assets. A framework of four thematic clusters and twelve subject matter nodes was designed to capture diverse disaster and global health in crisis-related content. From 2005-2011 there were 18,660 articles referring to the term [disaster]. Restricting the search to human research, MeSH terms, and the English language left 7,736 articles, an unmanageable number to adequately process for research, policy or best practices. However, using the crossed search and matrix process revealed further examples of robust realms of research in disasters, emergency medicine, EMS, public health and global health. Examples of potential gaps in the current peer-reviewed disaster and global health in crisis literature were identified in mental health, elderly care, and alternate sites of care. The same framework and process were then applied to Google Scholar, specifically for topics that returned few PubMed results. Applying the same framework and process to Google Scholar, the example searches retrieved unique peer-reviewed articles not identified in PubMed, as well as documents including books, governmental documents and consensus papers.
The proposed framework and process, using four thematic clusters, twelve subject matter nodes, and matrix and table searches applied to PubMed and Google Scholar, unlock otherwise inaccessible opportunities to better navigate the rapidly growing body of peer-reviewed disaster and global health in crisis literature. This approach will assist researchers, policy makers, and practitioners in generating future research questions, reporting on the overall evolution of the disaster and global health in crisis field, and further guiding disaster planning, prevention, preparedness, mitigation, response and recovery.
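    The crossed-term matrix lends itself to mechanical query generation. A minimal sketch, assuming a toy subset of clusters and terms (the authors' actual 84-term set is not reproduced here):

```python
from itertools import product

# Illustrative subset of nodes and terms, not the authors' full framework.
clusters = {
    "event": ["earthquake", "flood"],
    "response": ["humanitarian assistance", "public health"],
}

# Cross each pair of terms with the anchor term 'disaster' to build
# a matrix of PubMed-style queries, one query per matrix cell.
queries = [
    f'("{a}") AND ("{b}") AND (disaster)'
    for a, b in product(clusters["event"], clusters["response"])
]
for q in queries:
    print(q)
```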

  9. Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.

    PubMed

    Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin

    2014-10-01

    With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
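    A normalized term frequency profile of the kind described can be sketched as follows; the normalization shown (dividing by the user's most frequent tag) is one common choice and may differ from the paper's exact formula, and the tag data are hypothetical:

```python
def user_profile(tag_counts):
    """Preference degree of a user on each tag: the tag's usage count
    normalized by the user's most frequent tag (normalized term frequency)."""
    peak = max(tag_counts.values())
    return {tag: n / peak for tag, n in tag_counts.items()}

# Hypothetical tagging history for one user in a folksonomy system.
print(user_profile({"jazz": 8, "piano": 4, "live": 2}))
# {'jazz': 1.0, 'piano': 0.5, 'live': 0.25}
```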

  10. Combinatorial Fusion Analysis for Meta Search Information Retrieval

    NASA Astrophysics Data System (ADS)

    Hsu, D. Frank; Taksa, Isak

    Leading commercial search engines are built as single event systems. In response to a particular search query, the search engine returns a single list of ranked search results. To find more relevant results the user must frequently try several other search engines. A meta search engine was developed to enhance the process of multi-engine querying. The meta search engine queries several engines at the same time and fuses individual engine results into a single search results list. The fusion of multiple search results has been shown (mostly experimentally) to be highly effective. However, the question of why and how the fusion should be done still remains largely unanswered. In this chapter, we utilize the combinatorial fusion analysis proposed by Hsu et al. to analyze combination and fusion of multiple sources of information. A rank/score function is used in the design and analysis of our framework. The framework provides a better understanding of the fusion phenomenon in information retrieval. For example, to improve the performance of the combined multiple scoring systems, it is necessary that each of the individual scoring systems has relatively high performance and the individual scoring systems are diverse. Additionally, we illustrate various applications of the framework using two examples from the information retrieval domain.
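    As an illustration of combining multiple scoring systems, here is a minimal average-score fusion sketch; it shows the rank/score idea in miniature and is not the full combinatorial fusion analysis of Hsu et al.:

```python
def fuse(score_lists):
    """Average-score fusion of several scoring systems over the same
    document pool; each input maps document -> normalized score."""
    docs = set().union(*score_lists)
    combined = {d: sum(s.get(d, 0.0) for s in score_lists) / len(score_lists)
                for d in docs}
    # Rank/score function view: order documents by the combined score.
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical normalized scores from two diverse engines.
engine_a = {"d1": 0.9, "d2": 0.5, "d3": 0.1}
engine_b = {"d2": 0.8, "d3": 0.7, "d4": 0.6}
print(fuse([engine_a, engine_b]))  # d2 rises to the top after fusion
```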

  11. Total Quality: An Understanding and Application For Community, Junior, and Technical Colleges.

    ERIC Educational Resources Information Center

    Burgdorf, Augustus

    1992-01-01

    Total Quality (TQ) is a customer-oriented philosophy of management that utilizes total employee involvement in the relentless, daily search for improvement of product and service quality, through the use of statistical methods, employee teams, and performance management. In the TQ framework, "internal" customers are individuals within the…

  12. In Search of a Method to Assess Dispositional Behaviours: The Case of Otago Virtual Hospital

    ERIC Educational Resources Information Center

    Loke, Swee-Kin; Blyth, Phil; Swan, Judith

    2012-01-01

    While the potentials of virtual worlds to support experiential learning in medical education are well documented, assessment of student learning within these environments is relatively scarce and often incongruent. In this article, a conceptual framework is proposed for formatively assessing dispositional behaviours in scenario-based learning…

  13. Cognitive-Behavioral Psychotherapy for Anxiety and Depressive Disorders in Children and Adolescents: An Evidence-Based Medicine Review

    ERIC Educational Resources Information Center

    Compton, Scott N.; March, John S.; Brent, David; Albano, Anne Marie; Weersing, V. Robin; Curry, John

    2004-01-01

    Objective: To review the literature on the cognitive-behavioral treatment of children and adolescents with anxiety and depressive disorders within the conceptual framework of evidence-based medicine. Method: The psychiatric and psychological literature was systematically searched for controlled trials applying cognitive-behavioral treatment to…

  14. The 21st Century Writing Program: Collaboration for the Common Good

    ERIC Educational Resources Information Center

    Moberg, Eric

    2010-01-01

    The purpose of this report is to review the literature on theoretical frameworks, best practices, and conceptual models for the 21st century collegiate writing program. Methods include electronic database searches for recent and historical peer-reviewed scholarly literature on collegiate writing programs. The author analyzed over 65 sources from…

  15. Linkages Between Clinical Practices and Community Organizations for Prevention: A Literature Review and Environmental Scan

    PubMed Central

    Hinnant, Laurie W.; Kane, Heather; Horne, Joseph; McAleer, Kelly; Roussel, Amy

    2012-01-01

    Objectives. We conducted a literature review and environmental scan to develop a framework for interventions that utilize linkages between clinical practices and community organizations for the delivery of preventive services, and to identify and characterize these efforts. Methods. We searched 4 major health services and social science electronic databases and conducted an Internet search to identify examples of linkage interventions in the areas of tobacco cessation, obesity, nutrition, and physical activity. Results. We identified 49 interventions, of which 18 examples described their evaluation methods or reported any intervention outcomes. Few conducted evaluations that were rigorous enough to capture changes in intermediate or long-term health outcomes. Outcomes in these evaluations were primarily patient-focused and did not include organizational or linkage characteristics. Conclusions. An attractive option to increase the delivery of preventive services is to link primary care practices to community organizations; evidence is not yet conclusive, however, that such linkage interventions are effective. Findings provide recommendations to researchers and organizations that fund research, and call for a framework and metrics to study linkage interventions. PMID:22690974

  16. A procedure of multiple period searching in unequally spaced time-series with the Lomb-Scargle method

    NASA Technical Reports Server (NTRS)

    Van Dongen, H. P.; Olofsen, E.; VanHartevelt, J. H.; Kruyt, E. W.; Dinges, D. F. (Principal Investigator)

    1999-01-01

    Periodogram analysis of unequally spaced time-series, as part of many biological rhythm investigations, is complicated. The mathematical framework is scattered over the literature, and the interpretation of results is often debatable. In this paper, we show that the Lomb-Scargle method is the appropriate tool for periodogram analysis of unequally spaced data. A unique procedure of multiple period searching is derived, facilitating the assessment of the various rhythms that may be present in a time-series. All relevant mathematical and statistical aspects are considered in detail, and much attention is given to the correct interpretation of results. The use of the procedure is illustrated by examples, and problems that may be encountered are discussed. It is argued that, when following the procedure of multiple period searching, we can even benefit from the unequal spacing of a time-series in biological rhythm research.
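    For reference, the Lomb-Scargle normalized periodogram of a time series \(x_j = x(t_j)\) with mean \(\bar{x}\) and variance \(\sigma^2\) takes the standard form below (the paper's notation may differ):

```latex
P_X(\omega) = \frac{1}{2\sigma^2} \left\{
  \frac{\left[\sum_j (x_j - \bar{x}) \cos \omega (t_j - \tau)\right]^2}
       {\sum_j \cos^2 \omega (t_j - \tau)}
  + \frac{\left[\sum_j (x_j - \bar{x}) \sin \omega (t_j - \tau)\right]^2}
         {\sum_j \sin^2 \omega (t_j - \tau)}
\right\},
\qquad
\tan(2\omega\tau) = \frac{\sum_j \sin 2\omega t_j}{\sum_j \cos 2\omega t_j}
```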

  17. Data Warehouse Governance Programs in Healthcare Settings: A Literature Review and a Call to Action

    PubMed Central

    Elliott, Thomas E.; Holmes, John H.; Davidson, Arthur J.; La Chance, Pierre-Andre; Nelson, Andrew F.; Steiner, John F.

    2013-01-01

    Purpose: Given the extensive data stored in healthcare data warehouses, data warehouse governance policies are needed to ensure data integrity and privacy. This review examines the current state of the data warehouse governance literature as it applies to healthcare data warehouses, identifies knowledge gaps, provides recommendations, and suggests approaches for further research. Methods: A comprehensive literature search using five databases, journal article title searches, and citation searches was conducted between 1997 and 2012. Data warehouse governance documents from two healthcare systems in the USA were also reviewed. A modified version of nine components from the Data Governance Institute Framework for data warehouse governance guided the qualitative analysis. Results: Fifteen articles were retrieved. Only three were related to healthcare settings, each of which addressed only one of the nine framework components. Of the remaining 12 articles, 10 addressed between one and seven framework components and the remainder addressed none. Each of the two data warehouse governance plans obtained from healthcare systems in the USA addressed a subset of the framework components, and between them they covered all nine. Conclusions: While published data warehouse governance policies are rare, the 15 articles and two healthcare organizational documents reviewed in this study may provide guidance for creating such policies. Additional research is needed in this area to ensure that data warehouse governance policies are feasible and effective. The gap between the development of data warehouses in healthcare settings and formal governance policies is substantial, as evidenced by the sparse literature in this domain. PMID:25848561

  18. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.

  19. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized by the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes advantage of smartphones’ user-friendly interfaces, ubiquitous Internet connectivity and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  20. Policy guidance on threats to legislative interventions in public health: a realist synthesis.

    PubMed

    Wong, Geoff; Pawson, Ray; Owen, Lesley

    2011-04-10

    Legislation is one of the most powerful weapons for improving population health and is often used by policy and decision makers. Little research exists to guide them as to whether legislation is feasible and/or will succeed. We aimed to produce a coherent and transferable evidence-based framework of threats to legislative interventions to assist the decision making process and to test this through the 'case study' of legislation to ban smoking in cars carrying children. We conceptualised legislative interventions as complex social interventions and so used the realist synthesis method to systematically review the literature for evidence. 99 articles were found through searches on five electronic databases (MEDLINE, HMIC, EMBASE, PsychINFO, Social Policy and Practice) and iterative purposive searching. Our initial searches sought any studies that contained information on smoking in vehicles carrying children. Throughout the review we continued, where needed, to search for additional studies of any type that would conceptually contribute to helping build and/or test our framework. Our framework identified a series of transferable threats to public health legislation. When applied to smoking bans in vehicles, problem misidentification, public support, opposition, and enforcement issues were particularly prominent threats. Our framework enabled us to understand and explain the nature of each threat and to infer the most likely outcome if such legislation were to be proposed in a jurisdiction where no such ban existed. Specifically, the micro-environment of a vehicle can contain highly hazardous levels of second hand smoke. Public support for such legislation is high amongst smokers and non-smokers, and their underlying motivations were very similar - wanting to practice the Millian principle of protecting children from harm. Evidence indicated that the tobacco industry was not likely to oppose legislation and arguments that such a law would be 'unenforceable' were unfounded. It is possible to develop a coherent and transferable evidence-based framework of the ideas and assumptions behind the threats to legislative interventions that may assist policy and decision makers to analyse and judge if legislation is feasible and/or likely to succeed.

  1. A FRAMEWORK FOR INTERPRETING FAST RADIO TRANSIENTS SEARCH EXPERIMENTS: APPLICATION TO THE V-FASTR EXPERIMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Tingay, Steven J.; Wayth, Randall B.

    2013-04-10

    We define a framework for determining constraints on the detection rate of fast transient events from a population of underlying sources, with a view to incorporate beam shape, frequency effects, scattering effects, and detection efficiency into the metric. We then demonstrate a method for combining independent data sets into a single event rate constraint diagram, using a probabilistic approach to the limits on parameter space. We apply this new framework to present the latest results from the V-FASTR experiment, a commensal fast transients search using the Very Long Baseline Array (VLBA). In the 20 cm band, V-FASTR now has the ability to probe the regions of parameter space of importance for the observed Lorimer and Keane fast radio transient candidates by combining the information from observations with differing bandwidths, and properly accounting for the source dispersion measure, VLBA antenna beam shape, experiment time sampling, and stochastic nature of events. We then apply the framework to combine the results of the V-FASTR and Allen Telescope Array Fly's Eye experiments, demonstrating their complementarity. Expectations for fast transients experiments for the SKA Phase I dish array are then computed, and the impact of large differential bandwidths is discussed.

  2. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    PubMed

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the searching problem of protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE) based on the framework of an evolutionary algorithm was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the high-dimensional original conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique. The underestimate space could then be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensional conformational space could be avoided through such conversion. The tight lower bound estimate information was obtained to guide the searching direction, and the invalid searching area, in which the global optimal solution is not located, could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, reducing evaluation time and thereby making the computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with It Fix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more efficiently obtain the near-native protein structure.

  3. People, plants and health: a conceptual framework for assessing changes in medicinal plant consumption

    PubMed Central

    2012-01-01

    Background A large number of people in both developing and developed countries rely on medicinal plant products to maintain their health or treat illnesses. Available evidence suggests that medicinal plant consumption will remain stable or increase in the short to medium term. Knowledge on what factors determine medicinal plant consumption is, however, scattered across many disciplines, impeding, for example, systematic consideration of plant-based traditional medicine in national health care systems. The aim of the paper is to develop a conceptual framework for understanding medicinal plant consumption dynamics. Consumption is employed in the economic sense: use of medicinal plants by consumers or in the production of other goods. Methods PubMed and Web of Knowledge (formerly Web of Science) were searched using a set of medicinal plant key terms (folk/peasant/rural/traditional/ethno/indigenous/CAM/herbal/botanical/phytotherapy); each search term was combined with terms related to medicinal plant consumption dynamics (medicinal plants/health care/preference/trade/treatment seeking behavior/domestication/sustainability/conservation/urban/migration/climate change/policy/production systems). To eliminate studies not directly focused on medicinal plant consumption, searches were limited by a number of terms (chemistry/clinical/in vitro/antibacterial/dose/molecular/trial/efficacy/antimicrobial/alkaloid/bioactive/inhibit/antibody/purification/antioxidant/DNA/rat/aqueous). A total of 1940 references were identified; manual screening for relevance reduced this to 645 relevant documents. As the conceptual framework emerged inductively, additional targeted literature searches were undertaken on specific factors and links, bringing the final number of references to 737. Results The paper first defines the four main groups of medicinal plant users (1. Hunter-gatherers, 2. Farmers and pastoralists, 3. Urban and peri-urban people, 4. Entrepreneurs) and the three main types of benefits (consumer, producer, society-wide) derived from medicinal plants usage. Then a single unified conceptual framework for understanding the factors influencing medicinal plant consumption in the economic sense is proposed; the framework distinguishes four spatial levels of analysis (international, national, local, household) and identifies and describes 15 factors and their relationships. Conclusions The framework provides a basis for increasing our conceptual understanding of medicinal plant consumption dynamics, allows a positioning of existing studies, and can serve to guide future research in the area. This would inform the formation of future health and natural resource management policies. PMID:23148504

  4. Data-driven discovery of partial differential equations.

    PubMed

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
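    The core selection step can be illustrated with a sequentially thresholded least-squares sketch in the spirit of this sparse regression; the published method, with its Pareto-based model selection, is more elaborate, and the data and threshold below are toy values:

```python
import numpy as np

def stls(Theta, dudt, lam=0.1, iters=10):
    """Sequentially thresholded least squares: fit dudt ~ Theta @ xi,
    zeroing small coefficients to keep a parsimonious set of terms."""
    xi = np.linalg.lstsq(Theta, dudt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < lam
        xi[small] = 0.0
        big = ~small
        if big.any():  # refit only the surviving candidate terms
            xi[big] = np.linalg.lstsq(Theta[:, big], dudt, rcond=None)[0]
    return xi

# Toy example: recover dudt = 2*u - 0.5*u_xx from a noisy candidate library.
rng = np.random.default_rng(0)
u, u_x, u_xx = rng.normal(size=(3, 200))
Theta = np.column_stack([u, u_x, u_xx])  # library of candidate terms
dudt = 2.0 * u - 0.5 * u_xx + 1e-3 * rng.normal(size=200)
print(stls(Theta, dudt))  # approximately [2.0, 0.0, -0.5]
```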

  5. A loosely coupled framework for terminology controlled distributed EHR search for patient cohort identification in clinical research.

    PubMed

    Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N

    2012-01-01

    Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.

  6. Research Review: Reading Comprehension in Developmental Disorders of Language and Communication

    ERIC Educational Resources Information Center

    Ricketts, Jessie

    2011-01-01

    Background: Deficits in reading comprehension are found in developmental disorders of language and communication, including specific language impairment (SLI), Down syndrome (DS) and autism spectrum disorders (ASD). Methods: In this review (based on a search of the ISI Web of Knowledge database to 2011), the Simple View of Reading is used as a framework for considering reading comprehension in these groups. Conclusions: There is substantial evidence for…

  7. Deep first formal concept search.

    PubMed

    Zhang, Tao; Li, Hui; Hong, Wenxue; Yuan, Xiamei; Wei, Xinyu

    2014-01-01

    The calculation of formal concepts is a very important part of the theory of formal concept analysis (FCA); however, within the framework of FCA, computing all formal concepts is the main challenge because of its exponential complexity and the difficulty of visualizing the calculation process. Building on the basic idea of depth-first search, this paper presents a visualization algorithm based on the attribute topology of the formal context. Constrained by the calculation rules, all concepts are obtained by a visualized global search over the topology, degenerated with fixed start and end points, without repetition or omission. This method makes the calculation of formal concepts precise and easy to operate and reflects the integrity of the algorithm, making it suitable for visualization analysis.

  8. Stellar Wakes from Dark Matter Subhalos

    NASA Astrophysics Data System (ADS)

    Buschmann, Malte; Kopp, Joachim; Safdi, Benjamin R.; Wu, Chih-Liang

    2018-05-01

    We propose a novel method utilizing stellar kinematic data to detect low-mass substructure in the Milky Way's dark matter halo. By probing characteristic wakes that a passing dark matter subhalo leaves in the phase-space distribution of ambient halo stars, we estimate sensitivities down to subhalo masses of ˜107 M⊙ or below. The detection of such subhalos would have implications for dark matter and cosmological models that predict modifications to the halo-mass function at low halo masses. We develop an analytic formalism for describing the perturbed stellar phase-space distributions, and we demonstrate through idealized simulations the ability to detect subhalos using the phase-space model and a likelihood framework. Our method complements existing methods for low-mass subhalo searches, such as searches for gaps in stellar streams, in that we can localize the positions and velocities of the subhalos today.

  9. An algebra-based method for inferring gene regulatory networks

    PubMed Central

    2014-01-01

    Background The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. Results This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Conclusions Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks, one that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html. PMID:24669835

  10. Framework of outcome measures recommended for use in the evaluation of childhood obesity treatment interventions: the CoOR framework.

    PubMed

    Bryant, M; Ashton, L; Nixon, J; Jebb, S; Wright, J; Roberts, K; Brown, J

    2014-12-01

    Consensus is lacking in determining appropriate outcome measures for assessment of childhood obesity treatments. Inconsistency in the use and reporting of such measures impedes comparisons between treatments and limits consideration of effectiveness. This study aimed to produce a framework of recommended outcome measures: the Childhood obesity treatment evaluation Outcomes Review (CoOR) framework. A systematic review including two searches was conducted to identify (1) existing trial outcome measures and (2) manuscripts describing development/evaluation of outcome measures. Outcomes included anthropometry, diet, eating behaviours, physical activity, sedentary time/behaviour, fitness, physiology, environment, psychological well-being and health-related quality of life. Eligible measures were appraised by the internal team using a system developed from international guidelines, followed by appraisal from national external expert collaborators. A total of 25,486 papers were identified through both searches. Eligible search 1 trial papers cited 417 additional papers linked to outcome measures, of which 56 were eligible. A further 297 outcome development/evaluation papers met eligibility criteria from search 2. Combined, these described 191 outcome measures. After internal and external appraisal, 52 measures across 10 outcomes were recommended for inclusion in the CoOR framework. Application of the CoOR framework will ensure greater consistency in choosing robust outcome measures that are appropriate to population characteristics. © 2014 The Authors. Pediatric Obesity © 2014 International Association for the Study of Obesity.

  11. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    PubMed

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effectively searching genomes for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudo-knots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a Covariance model, with a significant reduction in computation time. In particular, very accurate searches of tmRNAs in bacterial genomes and of telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based searching tool is free upon request and can be downloaded at our site http://w.uga.edu/RNA-informatics/software/index.php.

  12. PyCPR - a python-based implementation of the Conjugate Peak Refinement (CPR) algorithm for finding transition state structures.

    PubMed

    Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias

    2016-10-01

    Conjugate peak refinement (CPR) is a powerful and robust method to search for transition states on a molecular potential energy surface. Nevertheless, the method had, to the best of our knowledge, so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. In the end, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the framework pDynamo allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads. Graphical abstract: PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.

  13. Robust object tracking based on self-adaptive search area

    NASA Astrophysics Data System (ADS)

    Dong, Taihang; Zhong, Sheng

    2018-02-01

    Discriminative correlation filter (DCF) based trackers have recently achieved excellent performance with great computational efficiency. However, DCF based trackers suffer from boundary effects, which result in unstable performance in challenging situations exhibiting fast motion. In this paper, we propose a novel method to mitigate this side-effect in DCF based trackers. We change the search area according to the prediction of target motion. When the object moves fast, a broad search area alleviates boundary effects and preserves the probability of locating the object. When the object moves slowly, a narrow search area limits the effect of irrelevant background information and improves computational efficiency to attain real-time performance. This strategy can substantially mitigate boundary effects in situations exhibiting fast motion and motion blur, and it can be used in almost all DCF based trackers. The experiments on the OTB benchmark show that the proposed framework improves the performance compared with the baseline trackers.
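    The motion-dependent window adaptation can be sketched as a simple clamped scaling rule; all names and parameters below are illustrative stand-ins, not the paper's tuned values:

```python
def search_radius(velocity, base=60, gain=1.5, lo=40, hi=160):
    """Scale the correlation-filter search window with the predicted
    target speed (pixels/frame): wide when fast, narrow when slow."""
    speed = (velocity[0] ** 2 + velocity[1] ** 2) ** 0.5
    return int(min(hi, max(lo, base + gain * speed)))

print(search_radius((2, 1)))    # slow target -> near the base window
print(search_radius((50, 40)))  # fast target -> enlarged window
```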

  14. PISA: Federated Search in P2P Networks with Uncooperative Peers

    NASA Astrophysics Data System (ADS)

    Ren, Zujie; Shou, Lidan; Chen, Gang; Chen, Chun; Bei, Yijun

    Recently, federated search in P2P networks has received much attention. Most of the previous work assumed a cooperative environment where each peer can actively participate in information publishing and distributed document indexing. However, little work has addressed the problem of incorporating uncooperative peers, which do not publish their own corpus statistics, into a network. This paper presents a P2P-based federated search framework called PISA which incorporates uncooperative peers as well as the normal ones. In order to address the indexing needs for uncooperative peers, we propose a novel heuristic query-based sampling approach which can obtain high-quality resource descriptions from uncooperative peers at relatively low communication cost. We also propose an effective method called RISE to merge the results returned by uncooperative peers. Our experimental results indicate that PISA can provide quality search results, while utilizing the uncooperative peers at a low cost.

  15. An effective hybrid immune algorithm for solving the distributed permutation flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min

    2014-09-01

    In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.

  16. U.S. states and territories national tsunami hazard assessment, historic record and sources for waves

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Weaver, C.

    2007-12-01

    In 2005, the U.S. National Science and Technology Council (NSTC) released a joint report by the sub-committee on Disaster Reduction and the U.S. Group on Earth Observations titled Tsunami Risk Reduction for the United States: A Framework for Action (Framework). The Framework outlines the President's strategy for reducing the United States tsunami risk. The first specific action called for in the Framework is to "Develop standardized and coordinated tsunami hazard and risk assessments for all coastal regions of the United States and its territories." Since NOAA is the lead agency for providing tsunami forecasts and warnings and NOAA's National Geophysical Data Center (NGDC) catalogs information on global historic tsunamis, NOAA/NGDC was asked to take the lead in conducting the first national tsunami hazard assessment. Earthquakes or earthquake-generated landslides caused more than 85% of the tsunamis in the NGDC tsunami database. Since the United States Geological Survey (USGS) conducts research on earthquake hazards facing all of the United States and its territories, NGDC and USGS partnered together to conduct the first tsunami hazard assessment for the United States and its territories. A complete tsunami hazard and risk assessment consists of a hazard assessment, an exposure and vulnerability assessment of buildings and people, and a loss assessment. This report is an interim step towards a tsunami risk assessment. The goal of this report is to provide a qualitative assessment of the United States tsunami hazard at the national level. Two different methods are used to assess the U.S. tsunami hazard. The first method involves a careful examination of the NGDC historical tsunami database. This resulted in a qualitative national tsunami hazard assessment based on the distribution of runup heights and the frequency of runups. Although tsunami deaths are a measure of risk rather than hazard, the known tsunami deaths found in the NGDC database search were compared with the qualitative assessments based on frequency and amplitude. The second method to assess tsunami hazard involved using the USGS earthquake databases to search for possible earthquake sources near American coastlines to extend the NOAA/NGDC tsunami databases backward in time. The qualitative tsunami hazard assessment based on the results of the NGDC and USGS database searches will be presented.

  17. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedups in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
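    The A* component named here is the classic best-first tree search. A generic sketch (not the GPU-parallel gOSPREY implementation) on a toy graph standing in for a pruned conformation tree:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Generic A*: `neighbors(n)` yields (next_node, edge_cost) pairs and
    `h(n)` is an admissible heuristic lower-bounding the cost to `goal`."""
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path  # first goal pop is optimal with admissible h
        if best_g.get(node, float("inf")) <= g:
            continue  # already expanded with an equal or cheaper cost
        best_g[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Toy graph; a zero heuristic is trivially admissible.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(astar("a", "c", lambda n: graph[n], lambda n: 0))  # (2.0, ['a', 'b', 'c'])
```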

  18. A Framework for Cloudy Model Optimization and Database Storage

    NASA Astrophysics Data System (ADS)

    Calvén, Emilia; Helton, Andrew; Sankrit, Ravi

    2018-01-01

    We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in SQL database format for later usage. The database can be searched for the models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or by usage of a website specifically made for this purpose.
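    A best-fit lookup of the kind described maps naturally onto SQL. A minimal sketch with a hypothetical schema and line ratio (sqlite3 is used only for self-containment; the project's actual schema is not specified in this record):

```python
import sqlite3

# Hypothetical schema: one row per Cloudy model, with a model parameter
# and a predicted line ratio; models are ranked by distance to an
# observed ratio.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE models (id INTEGER, density REAL, "
            "ratio_oiii_hbeta REAL)")
con.executemany("INSERT INTO models VALUES (?, ?, ?)",
                [(1, 1e4, 3.2), (2, 1e5, 5.1), (3, 1e6, 4.4)])

observed = 4.6
best = con.execute(
    "SELECT id, density, ratio_oiii_hbeta, "
    "ABS(ratio_oiii_hbeta - ?) AS dist "
    "FROM models ORDER BY dist LIMIT 2", (observed,)).fetchall()
print(best)  # the two models closest to the observed line ratio
```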

  19. Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems.

    PubMed

    Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G

    2016-01-01

    This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of the Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability in combining the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent a collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability of adapting the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.

  20. A predictive machine learning approach for microstructure optimization and materials design

    NASA Astrophysics Data System (ADS)

    Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok

    2015-06-01

    This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
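    The three-stage pipeline named here (random data generation, feature selection, classification) can be sketched schematically; the descriptors and labels below are synthetic stand-ins for microstructure features, and scikit-learn is assumed as an implementation convenience:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

# 1) Random data generation: hypothetical microstructure descriptors with
#    a label for whether the design target is met.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # 20 candidate features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)  # property driven by 2 features

# 2) Feature selection: keep the most informative descriptors.
selector = SelectKBest(f_classif, k=5).fit(X, y)
X_sel = selector.transform(X)

# 3) Classification: learn which microstructures satisfy the constraints.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sel, y)
print(sorted(np.flatnonzero(selector.get_support())))  # should include 3 and 7
```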

  1. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, which can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand, without specific knowledge of the underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular the star graph. These examples show that regularity and high connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
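
    The reduction step can be illustrated with a short, self-contained sketch (not the authors' code): running the Lanczos iteration from the initial state of the walk builds an orthonormal Krylov basis, and the iteration terminates exactly when the generated subspace becomes invariant under the Hamiltonian. The star graph is used below because the reduction is dramatic.

        import numpy as np

        def lanczos_subspace(H, psi0, tol=1e-10):
            # Krylov basis from psi0; stops once the subspace becomes invariant.
            v = psi0 / np.linalg.norm(psi0)
            basis, alphas, betas = [v], [], []
            for _ in range(len(psi0)):           # Krylov dimension is at most n
                w = H @ basis[-1]
                a = np.vdot(basis[-1], w).real
                alphas.append(a)
                w = w - a * basis[-1]
                if len(basis) > 1:
                    w = w - betas[-1] * basis[-2]
                b = np.linalg.norm(w)
                if b < tol:                      # dynamics closed: subspace is invariant
                    break
                betas.append(b)
                basis.append(w / b)
            return np.array(basis), np.array(alphas), np.array(betas)

        # star graph adjacency (hub + 4 leaves); dynamics started on the hub
        n = 5
        H = np.zeros((n, n)); H[0, 1:] = H[1:, 0] = 1.0
        basis, a, b = lanczos_subspace(H, np.eye(n)[0])
        print(basis.shape)   # the 5-dimensional problem reduces to a 2-dim subspace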

  2. Serendipity in dark photon searches

    NASA Astrophysics Data System (ADS)

    Ilten, Philip; Soreq, Yotam; Williams, Mike; Xue, Wei

    2018-06-01

    Searches for dark photons provide serendipitous discovery potential for other types of vector particles. We develop a framework for recasting dark photon searches to obtain constraints on more general theories, which includes a data-driven method for determining hadronic decay rates. We demonstrate our approach by deriving constraints on a vector that couples to the B-L current, a leptophobic B boson that couples directly to baryon number and to leptons via B- γ kinetic mixing, and on a vector that mediates a protophobic force. Our approach can easily be generalized to any massive gauge boson with vector couplings to the Standard Model fermions, and software to perform any such recasting is provided at https://gitlab.com/philten/darkcast .

  3. eHealth Search Patterns: A Comparison of Private and Public Health Care Markets Using Online Panel Data

    PubMed Central

    2017-01-01

    Background Patient and consumer access to eHealth information is of crucial importance because of its role in patient-centered medicine and in improving knowledge about general aspects of health and medical topics. Objectives The objectives were to analyze and compare eHealth search patterns in a private (United States) and a public (United Kingdom) health care market. Methods A new taxonomy of eHealth websites is proposed to organize the largest eHealth websites. An online measurement framework is developed that provides a precise and detailed measurement system. Online panel data are used to accurately track and analyze detailed search behavior across 100 of the largest eHealth websites in the US and UK health care markets. Results The health, medical, and lifestyle categories account for approximately 90% of online activity, and e-pharmacies, social media, and professional categories account for the remaining 10% of online activity. Overall search penetration of eHealth websites is significantly higher in the private (United States) than the public market (United Kingdom). Almost twice the number of eHealth users in the private market have adopted online search in the health and lifestyle categories and also spend more time per website than those in the public market. The use of medical websites for specific conditions is almost identical in both markets. The allocation of search effort across categories is similar in both markets. For all categories, the vast majority of eHealth users access only one website within each category. Those who search two or more websites display very narrow search patterns. All users spend relatively little time on eHealth, that is, 3-7 minutes per website. Conclusions The proposed online measurement framework exploits online panel data to provide a powerful and objective method of analyzing and exploring eHealth behavior. The private health care system does appear to have an influence on eHealth search behavior in terms of search penetration and time spent per website in the health and lifestyle categories. Two explanations are offered: (1) the personal incentive of medical costs in the private market incentivizes users to conduct online search; and (2) health care information is more easily accessible through health care professionals in the United Kingdom compared with the United States. However, the use of medical websites is almost identical, suggesting that patients interested in a specific condition are motivated to search for and evaluate health information, irrespective of the health care market. The relatively low level of search, in terms of the number of websites accessed and the average time per website, raises important questions about the actual level of patient informedness in both markets. Areas for future research are outlined. PMID:28408362

  4. eTACTS: a method for dynamically filtering clinical trial search results.

    PubMed

    Miotto, Riccardo; Jiang, Silis; Weng, Chunhua

    2013-12-01

    Information overload is a significant problem facing online clinical trial searchers. We present eTACTS, a novel interactive retrieval framework using common eligibility tags to dynamically filter clinical trial search results. eTACTS mines frequent eligibility tags from free-text clinical trial eligibility criteria and uses these tags for trial indexing. After an initial search, eTACTS presents to the user a tag cloud representing the current results. When the user selects a tag, eTACTS retains only those trials containing that tag in their eligibility criteria and generates a new cloud based on tag frequency and co-occurrences in the remaining trials. The user can then select a new tag or unselect a previous tag. The process iterates until a manageable number of trials is returned. We evaluated eTACTS in terms of filtering efficiency, diversity of the search results, and user eligibility to the filtered trials using both qualitative and quantitative methods. eTACTS (1) rapidly reduced search results from over a thousand trials to ten; (2) highlighted trials that are generally not top-ranked by conventional search engines; and (3) retrieved a greater number of suitable trials than existing search engines. eTACTS enables intuitive clinical trial searches by indexing eligibility criteria with effective tags. User evaluation was limited to one case study and a small group of evaluators due to the long duration of the experiment. Although a larger-scale evaluation could be conducted, this feasibility study demonstrated significant advantages of eTACTS over existing clinical trial search engines. A dynamic eligibility tag cloud can potentially enhance state-of-the-art clinical trial search engines by allowing intuitive and efficient filtering of the search result space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
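
    The filtering loop can be sketched in a few lines. The trial identifiers and tags below are fabricated, and the real system mines its tags from free-text eligibility criteria rather than taking them as given.

        from collections import Counter

        # toy index: trial id -> eligibility tags mined from its criteria text
        trials = {
            "NCT001": {"type 2 diabetes", "metformin", "adult"},
            "NCT002": {"type 2 diabetes", "insulin", "adult"},
            "NCT003": {"type 1 diabetes", "insulin", "pediatric"},
        }

        def tag_cloud(active_trials, selected):
            # frequency of co-occurring tags among the remaining trials
            counts = Counter(t for tags in active_trials.values()
                               for t in tags if t not in selected)
            return counts.most_common()

        def apply_tag(active_trials, tag):
            # retain only trials whose eligibility criteria contain the tag
            return {tid: tags for tid, tags in active_trials.items() if tag in tags}

        remaining, chosen = dict(trials), set()
        print(tag_cloud(remaining, chosen))        # initial cloud over all results
        remaining = apply_tag(remaining, "adult")  # the user clicks a tag
        chosen.add("adult")
        print(tag_cloud(remaining, chosen))        # new cloud from remaining trials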

  5. Essential Annotation Schema for Ecology (EASE)-A framework supporting the efficient data annotation and faceted navigation in ecology.

    PubMed

    Pfaff, Claas-Thido; Eichenberg, David; Liebergesell, Mario; König-Ries, Birgitta; Wirth, Christian

    2017-01-01

    Ecology has become a data-intensive science over the last decades, one which often relies on the reuse of data in cross-experimental analyses. However, finding data which qualify for reuse in a specific context can be challenging. It requires good quality metadata and annotations as well as efficient search strategies. To date, full-text search (often on the metadata only) is the most widely used search strategy, although it is known to be inaccurate. Faceted navigation provides a filter mechanism based on fine-granular metadata, categorizing search objects along numeric and categorical parameters relevant for their discovery. Selecting from these parameters during a full-text search creates a system of filters which allows users to refine the results and improve their relevance. We developed a framework for efficient annotation and faceted navigation in ecology. It consists of an XML schema for storing the annotation of search objects and is accompanied by a vocabulary focused on ecology to support the annotation process. The framework consolidates ideas which originate from widely accepted metadata standards, textbooks, scientific literature, and vocabularies as well as from expert knowledge contributed by researchers from ecology and adjacent disciplines.

  6. Toward a public analysis database for LHC new physics searches using MadAnalysis 5

    NASA Astrophysics Data System (ADS)

    Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.

    2015-02-01

    We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.

  7. Stellar Wakes from Dark Matter Subhalos.

    PubMed

    Buschmann, Malte; Kopp, Joachim; Safdi, Benjamin R; Wu, Chih-Liang

    2018-05-25

    We propose a novel method utilizing stellar kinematic data to detect low-mass substructure in the Milky Way's dark matter halo. By probing characteristic wakes that a passing dark matter subhalo leaves in the phase-space distribution of ambient halo stars, we estimate sensitivities down to subhalo masses of ∼10^{7}  M_{⊙} or below. The detection of such subhalos would have implications for dark matter and cosmological models that predict modifications to the halo-mass function at low halo masses. We develop an analytic formalism for describing the perturbed stellar phase-space distributions, and we demonstrate through idealized simulations the ability to detect subhalos using the phase-space model and a likelihood framework. Our method complements existing methods for low-mass subhalo searches, such as searches for gaps in stellar streams, in that we can localize the positions and velocities of the subhalos today.

  8. Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk

    NASA Astrophysics Data System (ADS)

    Long, C. C.; Marsden, A. L.; Bazilevs, Y.

    2014-10-01

    In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.
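
    A heavily simplified sketch of the SEARCH/POLL pattern follows: the SEARCH step proposes a candidate from previously evaluated points (a crude stand-in for a true surrogate model), and the POLL step evaluates a coordinate stencil around the incumbent, halving the step size when neither improves. This toy version illustrates only the control flow, not the FSI-based cost function.

        import numpy as np

        def surrogate_pattern_search(f, x0, step=0.5, tol=1e-3, max_iter=100):
            x, fx = np.asarray(x0, float), f(x0)
            history = [(x.copy(), fx)]                 # evaluations feed the "surrogate"
            for _ in range(max_iter):
                # SEARCH step: probe the centroid of the best points seen so far
                # (a real SMF would fit and minimize an actual surrogate here).
                pts = sorted(history, key=lambda p: p[1])[:3]
                guess = np.mean([p[0] for p in pts], axis=0)
                fg = f(guess)
                if fg < fx:
                    x, fx = guess.copy(), fg
                    history.append((x.copy(), fx))
                    continue
                # POLL step: evaluate the +/- coordinate stencil around the incumbent
                improved = False
                for i in range(len(x)):
                    for s in (+1, -1):
                        cand = x.copy(); cand[i] += s * step
                        fc = f(cand)
                        history.append((cand, fc))
                        if fc < fx:
                            x, fx, improved = cand, fc, True
                if not improved:
                    step *= 0.5                        # refine the mesh
                    if step < tol:
                        break
            return x, fx

        print(surrogate_pattern_search(lambda v: (v[0]-1)**2 + (v[1]+2)**2, [0.0, 0.0]))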

  9. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous holdings. The past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge for the following reasons: 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to these issues; however, integrating semantic technologies with existing services is challenging due to the expandability limitations of the service frameworks and metadata templates. 4) The capabilities to help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results. Furthermore, the presentation of value-added additional information (such as service quality and user feedback), which conveys important decision-supporting information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on a brokering middleware framework to search, integrate and visualize heterogeneous geospatial resources. Specifically, 1) a lightweight discovery broker is developed to conduct distributed search; the broker retrieves metadata records for geospatial resources and additional information from dispersed services (portals and catalogues) and other systems on the fly. 2) A quality monitoring and evaluation broker (i.e., QoS Checker) is developed and integrated to provide quality information for geospatial web services. 3) Semantic-assisted search and relevance evaluation functions are implemented by loosely interoperating with the ESIP Testbed component. 4) Sophisticated information and data visualization functionalities and tools are assembled to improve the user experience and assist resource selection.

  10. Mass Spectra-Based Framework for Automated Structural Elucidation of Metabolome Data to Explore Phytochemical Diversity

    PubMed Central

    Matsuda, Fumio; Nakabayashi, Ryo; Sawada, Yuji; Suzuki, Makoto; Hirai, Masami Y.; Kanaya, Shigehiko; Saito, Kazuki

    2011-01-01

    A novel framework for automated elucidation of metabolite structures in liquid chromatography–mass spectrometer metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method. PMID:22645535

  11. Service quality framework for clinical laboratories.

    PubMed

    Ramessur, Vinaysing; Hurreeram, Dinesh Kumar; Maistry, Kaylasson

    2015-01-01

    The purpose of this paper is to illustrate a service quality framework that enhances service delivery in clinical laboratories by gauging medical practitioner satisfaction and by providing avenues for continuous improvement. The case study method has been used for conducting the exploratory study, with focus on the Mauritian public clinical laboratory. A structured questionnaire based on the SERVQUAL service quality model was used for data collection, analysis and for the development of the service quality framework. The study confirms the pertinence of the following service quality dimensions within the context of clinical laboratories: tangibility, reliability, responsiveness, turnaround time, technology, test reports, communication and laboratory staff attitude and behaviour. The service quality framework developed, termed LabSERV, is vital for clinical laboratories in the search for improving service delivery to medical practitioners. This is a pioneering work carried out in the clinical laboratory sector in Mauritius. Medical practitioner expectations and perceptions have been simultaneously considered to generate a novel service quality framework for clinical laboratories.

  12. Data-driven discovery of partial differential equations

    PubMed Central

    Rudy, Samuel H.; Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan

    2017-01-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg–de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable. PMID:28508044
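
    The core regression step can be illustrated with a sequential thresholded least-squares sketch, one common sparsity-promoting technique for this task (not necessarily the authors' exact algorithm). The library columns below are synthetic stand-ins rather than derivatives computed from real measurements.

        import numpy as np

        def stls(Theta, ut, lam=0.1, iters=10):
            # sequential thresholded least squares: sparse xi with u_t = Theta @ xi
            xi = np.linalg.lstsq(Theta, ut, rcond=None)[0]
            for _ in range(iters):
                small = np.abs(xi) < lam
                xi[small] = 0.0
                big = ~small
                if big.any():   # refit the surviving candidate terms
                    xi[big] = np.linalg.lstsq(Theta[:, big], ut, rcond=None)[0]
            return xi

        # synthetic check: regression targets generated by u_t = 0.5 * u_xx
        rng = np.random.default_rng(1)
        u, ux, uxx = rng.standard_normal((3, 200))        # stand-in columns
        ut = 0.5 * uxx
        Theta = np.column_stack([u, ux, uxx, u * ux])     # candidate term library
        print(stls(Theta, ut))   # expect approximately [0, 0, 0.5, 0]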

  13. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  14. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many domains globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms, with the aim of producing new high-performance algorithms with minimal computational overhead for MO optimization.
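
    As a small illustration of the scalarization idea mentioned above, the sketch below sweeps the weight of a weighted-sum objective to trace points along the Pareto front of a two-objective toy problem; both objectives are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        # two competing objectives on a single design variable x in [0, 1]
        f1 = lambda x: float(x ** 2)
        f2 = lambda x: float((x - 1) ** 2)

        pareto = []
        for w in np.linspace(0, 1, 5):
            # weighted-sum scalarization turns the MO problem into a scalar one
            res = minimize(lambda x: w * f1(x[0]) + (1 - w) * f2(x[0]),
                           x0=[0.5], bounds=[(0, 1)])
            x = res.x[0]
            pareto.append((round(x, 3), round(f1(x), 3), round(f2(x), 3)))

        for row in pareto:
            print(row)   # sweeping w traces points along the Pareto front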

  15. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many domains globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms, with the aim of producing new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  16. Human body segmentation via data-driven graph cut.

    PubMed

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

    Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning, with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is to first exploit human kinematics to search for body part candidates via dynamic programming, providing high-level evidence. Body part classifiers are then used to obtain bottom-up cues of the human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.

  17. Context matters: the structure of task goals affects accuracy in multiple-target visual search.

    PubMed

    Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R

    2014-05-01

    Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered under constraints that approximated either radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed-objective framework produced more multiple-target search errors; thus, adopting a fixed-duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. Comparability of outcome frameworks in medical education: Implications for framework development.

    PubMed

    Hautz, Stefanie C; Hautz, Wolf E; Feufel, Markus A; Spies, Claudia D

    2015-01-01

    Given the increasing mobility of medical students and practitioners, there is a growing need for harmonization of medical education and qualifications. Although several initiatives have sought to compare national outcome frameworks, this task has proven a challenge. Drawing on an analysis of existing outcome frameworks, we identify factors that hinder comparability and suggest ways of facilitating comparability during framework development and revisions. We searched MedLine, EmBase and the Internet for outcome frameworks in medical education published by national or governmental organizations. We analyzed these frameworks for differences and similarities that influence comparability. Of 1816 search results, 13 outcome frameworks met our inclusion criteria. These frameworks differ in five core features: history and origins, formal structure, medical education system, target audience and key terms. Many frameworks reference other frameworks without acknowledging these differences. Importantly, the level of detail of the outcomes specified differs both within and between frameworks. The differences identified explain some of the challenges involved in comparing outcome frameworks and medical qualifications. We propose a two-level model distinguishing between "core" competencies and culture-specific "secondary" competencies. This approach could strike a balance between local specifics and cross-national comparability of outcome frameworks and medical education.

  19. Mixed methods systematic review exploring mentorship outcomes in nursing academia.

    PubMed

    Nowell, Lorelli; Norris, Jill M; Mrklas, Kelly; White, Deborah E

    2017-03-01

    The aim of this study was to report on a mixed methods systematic review that critically examines the evidence for mentorship in nursing academia. Nursing education institutions globally have issued calls for mentorship. There is emerging evidence to support the value of mentorship in other disciplines, but the extant state of the evidence in nursing academia is not known. A comprehensive review of the evidence is required. A mixed methods systematic review. Five databases (MEDLINE, CINAHL, EMBASE, ERIC, PsycINFO) were searched using an a priori search strategy from inception to 2 November 2015 to identify quantitative, qualitative and mixed methods studies. Grey literature searches were also conducted in electronic databases (ProQuest Dissertations and Theses, Index to Theses) and mentorship conference proceedings and by hand searching the reference lists of eligible studies. Study quality was assessed prior to inclusion using standardized critical appraisal instruments from the Joanna Briggs Institute. A convergent qualitative synthesis design was used where results from qualitative, quantitative and mixed methods studies were transformed into qualitative findings. Mentorship outcomes were mapped to a theory-informed framework. Thirty-four studies were included in this review, from the 3001 records initially retrieved. In general, mentorship had a positive impact on behavioural, career, attitudinal, relational and motivational outcomes; however, the methodological quality of studies was weak. This review can inform the objectives of mentorship interventions and contribute to a more rigorous approach to studies that assess mentorship outcomes. © 2016 John Wiley & Sons Ltd.

  20. British American Tobacco on Facebook: undermining article 13 of the global World Health Organization Framework Convention on Tobacco Control

    PubMed Central

    Chapman, Simon

    2010-01-01

    Background The World Health Organization Framework Convention on Tobacco Control (WHO FCTC) bans all forms of tobacco advertising, promotion and sponsorship. The comprehensiveness of this ban has yet to be tested by online social networking media such as Facebook. In this paper, the activities of employees of the transnational tobacco company British American Tobacco (BAT) on Facebook and the type of content associated with two globally popular BAT brands (Dunhill and Lucky Strike) are mapped. Methods BAT employees on Facebook were identified, and then the term ‘British American Tobacco’ was searched for in the Facebook search engine and the results recorded, including titles, descriptions, names and the number of Facebook participants involved for each search result. To further detail any potential promotional activities, a search for two of BAT's global brands, ‘Dunhill’ and ‘Lucky Strike’, was conducted. Results Each of the 3 search terms generated more than 500 items across a variety of Facebook subsections. Discussion Some BAT employees are energetically promoting BAT and BAT brands on Facebook through joining and administrating groups, joining pages as fans and posting photographs of BAT events, products and promotional items. BAT employees undertaking these actions are from countries that have ratified the WHO FCTC, which requires signatories to ban all forms of tobacco advertising, including online and crossborder exposure from countries that are not enforcing advertising restrictions. The results of the present research could be used to test the comprehensiveness of the advertising ban by requesting that governments mandate the removal of this promotional material from Facebook. PMID:20395406

  1. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    PubMed Central

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. PMID:26673332

  2. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be made equivalent according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are presented to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.

  3. Digital Dimension Disruption: A National Security Enterprise Response

    DTIC Science & Technology

    2017-12-21

    societal institutions, methods of business, and fundamental ideas about national security. This realignment will, of necessity, change the frameworks... humans did calculations and searched for information. In the past quarter century, human use of computers has changed fundamentally, but common... the nature of data is, itself, undergoing a fundamental change. The terms "bespoke data" (from the British term for custom-tailored) and "by

  4. Partial branch and bound algorithm for improved data association in multiframe processing

    NASA Astrophysics Data System (ADS)

    Poore, Aubrey B.; Yan, Xin

    1999-07-01

    A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have shown themselves to yield near-optimal answers in real time. The necessary improvement in the quality of these solutions warrants a continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound are needed to improve solution quality. Such methods as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique along with adequate branching and ordering rules is developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The results show that the branch-and-bound framework greatly improves the quality of the Lagrangian relaxation solution and yields better multiple solutions in less time than relaxation alone.
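
    The control flow of a budget-limited ("partial") best-first branch-and-bound can be sketched generically, as below. The bound in the toy usage is deliberately trivial; in the paper's setting, the Lagrangian relaxation would supply both the branching guidance and a much tighter lower bound.

        import heapq

        def partial_branch_and_bound(root, branch, lower_bound, value, is_leaf,
                                     node_budget=1000):
            # best-first branch-and-bound truncated after node_budget expansions
            best, best_val = None, float("inf")
            heap = [(lower_bound(root), 0, root)]
            tie, expanded = 1, 0
            while heap and expanded < node_budget:
                lb, _, node = heapq.heappop(heap)
                if lb >= best_val:
                    continue                   # prune: bound cannot beat incumbent
                if is_leaf(node):
                    v = value(node)
                    if v < best_val:
                        best, best_val = node, v
                    continue
                expanded += 1
                for child in branch(node):
                    clb = lower_bound(child)
                    if clb < best_val:         # only queue promising children
                        heapq.heappush(heap, (clb, tie, child)); tie += 1
            return best, best_val

        # toy: pick x in {0,1}^3 minimizing (x.w - 5)^2; the internal-node bound
        # of 0 is a placeholder where a relaxation value would go.
        w = [2, 3, 4]
        branch = lambda n: [n + [0], n + [1]]
        is_leaf = lambda n: len(n) == 3
        value = lambda n: (sum(xi * wi for xi, wi in zip(n, w)) - 5) ** 2
        lower_bound = lambda n: value(n) if is_leaf(n) else 0
        print(partial_branch_and_bound([], branch, lower_bound, value, is_leaf))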

  5. Teaching Non-Recursive Binary Searching: Establishing a Conceptual Framework.

    ERIC Educational Resources Information Center

    Magel, E. Terry

    1989-01-01

    Discusses problems associated with teaching non-recursive binary searching in computer language classes, and describes a teacher-directed dialog based on dictionary use that helps students use their previous searching experiences to conceptualize the binary search process. Algorithmic development is discussed and appropriate classroom discussion…
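
    For reference, the non-recursive binary search that such a dictionary-based dialog builds toward can be written in a few lines; the word list is a toy stand-in for a dictionary.

        def binary_search(items, target):
            # iterative (non-recursive) binary search over a sorted list
            lo, hi = 0, len(items) - 1
            while lo <= hi:
                mid = (lo + hi) // 2          # open the "dictionary" in the middle
                if items[mid] == target:
                    return mid
                if items[mid] < target:
                    lo = mid + 1              # target lies in the right half
                else:
                    hi = mid - 1              # target lies in the left half
            return -1                         # not present

        words = ["ant", "bee", "cat", "dog", "elk", "fox"]
        print(binary_search(words, "dog"))    # 3
        print(binary_search(words, "gnu"))    # -1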

  6. Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Beaumont, B.; Duerr, R. E.; Hua, H.

    2009-12-01

    The past decade has seen a burgeoning of remote sensing and Earth science data providers, as evidenced in the growth of the Earth Science Information Partner (ESIP) federation. At the same time, the need to combine diverse data sets to enable understanding of the Earth as a system has also grown. While the expansion of data providers is in general a boon to such studies, the diversity presents a challenge to finding useful data for a given study. Locating all the data files with aerosol information for a particular volcanic eruption, for example, may involve learning and using several different search tools to execute the requisite space-time queries. To address this issue, the ESIP federation is developing a federated space-time query framework, based on the OpenSearch convention (www.opensearch.org), with Geo and Time extensions. In this framework, data providers publish OpenSearch Description Documents that describe in a machine-readable form how to execute queries against the provider. The novelty of OpenSearch is that the space-time query interface becomes both machine-callable and easy enough to integrate into the web browser's search box. This flexibility, together with a simple REST (HTTP GET) interface, should allow a variety of data providers to participate in the federated search framework, from large institutional data centers to individual scientists. The simple interface enables trivial querying of multiple data sources and participation in recursive-like federated searches, all using the same common OpenSearch interface. This simplicity also makes the construction of clients easy, as do the existing OpenSearch client libraries in a variety of languages. Moreover, a number of clients and aggregation services already exist, and OpenSearch is already supported by a number of web browsers such as Firefox and Internet Explorer.
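
    The client side of the convention can be sketched as follows: read the URL template advertised in a provider's OpenSearch Description Document, substitute the Geo and Time parameters, and issue a plain HTTP GET. The endpoint and template below are hypothetical.

        import urllib.parse

        # URL template as it might appear in a provider's OpenSearch Description
        # Document; the endpoint is hypothetical.
        template = ("https://example.org/opensearch?q={searchTerms}"
                    "&bbox={geo:box}&start={time:start}&end={time:end}")

        def fill(template, params):
            # substitute {name} and {prefix:name} placeholders, URL-encoding values
            for key, val in params.items():
                template = template.replace("{" + key + "}",
                                            urllib.parse.quote(str(val), safe=""))
            return template

        url = fill(template, {"searchTerms": "aerosol optical depth",
                              "geo:box": "-180,-90,180,90",
                              "time:start": "2008-01-01",
                              "time:end": "2008-12-31"})
        print(url)   # an HTTP GET on this URL would return Atom/RSS results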

  7. Essential Annotation Schema for Ecology (EASE)—A framework supporting the efficient data annotation and faceted navigation in ecology

    PubMed Central

    Eichenberg, David; Liebergesell, Mario; König-Ries, Birgitta; Wirth, Christian

    2017-01-01

    Ecology has become a data-intensive science over the last decades, one which often relies on the reuse of data in cross-experimental analyses. However, finding data which qualify for reuse in a specific context can be challenging. It requires good quality metadata and annotations as well as efficient search strategies. To date, full-text search (often on the metadata only) is the most widely used search strategy, although it is known to be inaccurate. Faceted navigation provides a filter mechanism based on fine-granular metadata, categorizing search objects along numeric and categorical parameters relevant for their discovery. Selecting from these parameters during a full-text search creates a system of filters which allows users to refine the results and improve their relevance. We developed a framework for efficient annotation and faceted navigation in ecology. It consists of an XML schema for storing the annotation of search objects and is accompanied by a vocabulary focused on ecology to support the annotation process. The framework consolidates ideas which originate from widely accepted metadata standards, textbooks, scientific literature, and vocabularies as well as from expert knowledge contributed by researchers from ecology and adjacent disciplines. PMID:29023519

  8. Where the item still rules supreme: Time-based selection, enumeration, pre-attentive processing and the target template?

    PubMed

    Watson, Derrick G

    2017-01-01

    I propose that there remains a central role for the item (or its equivalent) in a wider range of search and search-related tasks/functions than might be conveyed by the article. I consider the functional relationship between the framework and some aspects of previous theories, and suggest some challenges that the new framework might encounter.

  9. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.

  10. Methods, procedures, and contextual characteristics of health technology assessment and health policy decision making: comparison of health technology assessment agencies in Germany, United Kingdom, France, and Sweden.

    PubMed

    Schwarzer, Ruth; Siebert, Uwe

    2009-07-01

    The objectives of this study were (i) to develop a systematic framework for describing and comparing different features of health technology assessment (HTA) agencies, (ii) to identify and describe similarities and differences between the agencies, and (iii) to draw conclusions both for producers and users of HTA in research, policy, and practice. We performed a systematic literature search, added information from HTA agencies, and developed a conceptual framework comprising eight main domains: organization, scope, processes, methods, dissemination, decision, implementation, and impact. We grouped relevant items of these domains in an evidence table and chose five HTA agencies to test our framework: DAHTA@DIMDI, HAS, IQWiG, NICE, and SBU. Item and domain similarity was assessed using the percentage of identical characteristics in pairwise comparisons across agencies. Results were interpreted across agencies by demonstrating similarities and differences. Based on 306 included documents, we identified 90 characteristics in eight main domains appropriate for our framework. After applying the framework to the five agencies, we were able to show 40 percent similarity in "dissemination," 38 percent in "scope," 35 percent in "organization," 29 percent in "methods," 26 percent in "processes," 23 percent in "impact," 19 percent in "decision," and 17 percent in "implementation." We found considerably more differences than similarities in HTA features across agencies and countries. Our framework and comparison provide insights into, and clarification of, the need for harmonization. Our findings could serve as a descriptive database facilitating communication between producers and users.
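
    The similarity measure described (percentage of identical characteristics in pairwise comparisons) is straightforward to compute; the agency profiles below are fabricated toy entries, not data from the study.

        from itertools import combinations

        # toy item profiles: agency -> characteristic -> value (hypothetical entries)
        agencies = {
            "A": {"scope": "drugs", "appraisal": "yes", "public_report": "yes"},
            "B": {"scope": "drugs", "appraisal": "no",  "public_report": "yes"},
            "C": {"scope": "all",   "appraisal": "yes", "public_report": "yes"},
        }

        def pairwise_similarity(profiles):
            # percent of identical characteristics for every pair of agencies
            out = {}
            for a, b in combinations(profiles, 2):
                keys = profiles[a].keys() & profiles[b].keys()
                same = sum(profiles[a][k] == profiles[b][k] for k in keys)
                out[(a, b)] = 100.0 * same / len(keys)
            return out

        for pair, pct in pairwise_similarity(agencies).items():
            print(pair, f"{pct:.0f}% identical")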

  11. A framework for automatic information quality ranking of diabetes websites.

    PubMed

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring the content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average (p < 0.001), which is greater than that of the other automated methods proposed in the literature (average r score of 0.33).

  12. A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback.

    PubMed

    Rahman, Md Mahmudur; Bhattacharya, Prabir; Desai, Bipin C

    2007-01-01

    A content-based image retrieval (CBIR) framework for a diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. The organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specifically, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-means (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed at a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. An analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework.

  13. Chaos synchronization and Nelder-Mead search for parameter estimation in nonlinear pharmacological systems: Estimating tumor antigenicity in a model of immunotherapy.

    PubMed

    Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel

    2018-06-19

    In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
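
    As a minimal illustration of derivative-free parameter estimation with Nelder-Mead (omitting the chaos-synchronization component), the sketch below recovers the growth rate of a toy logistic model from noisy simulated data using SciPy.

        import numpy as np
        from scipy.optimize import minimize

        # toy model: logistic growth; estimate the true rate r = 0.8 from noisy data
        def simulate(r, n=50, dt=0.1, x0=0.1):
            x = np.empty(n); x[0] = x0
            for i in range(1, n):
                x[i] = x[i-1] + dt * r * x[i-1] * (1 - x[i-1])   # Euler step
            return x

        rng = np.random.default_rng(2)
        data = simulate(0.8) + 0.01 * rng.standard_normal(50)

        # least-squares objective over the unknown parameter, minimized by the
        # derivative-free Nelder-Mead simplex method
        objective = lambda p: np.sum((simulate(p[0]) - data) ** 2)
        res = minimize(objective, x0=[0.3], method="Nelder-Mead")
        print(res.x)   # should recover a rate close to 0.8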

  14. Protocols for Teaching Students How to Search for, Discover, and Evaluate Innovations

    ERIC Educational Resources Information Center

    Norton, William I., Jr.; Hale, Dena H.

    2011-01-01

    The authors introduce and develop protocols to guide aspiring entrepreneurs' behaviors in searching for and discovering innovative ideas that may have commercial potential. Systematic search has emerged as a theory-based, prescriptive framework to guide innovative behavior. Grounded in Fiet's theory of search and discovery, this article provides…

  15. Search Engines for Tomorrow's Scholars, Part Two

    ERIC Educational Resources Information Center

    Fagan, Jody Condit

    2012-01-01

    This two-part article considers how well some of today's search tools support scholars' work. The first part of the article reviewed Google Scholar and Microsoft Academic Search using a modified version of Carole L. Palmer, Lauren C. Teffeau, and Carrier M. Pirmann's framework (2009). Microsoft Academic Search is a strong contender when…

  16. A scoping review of scoping reviews: advancing the approach and enhancing the consistency

    PubMed Central

    Pham, Mai T; Rajić, Andrijana; Greig, Judy D; Sargeant, Jan M; Papadopoulos, Andrew; McEwen, Scott A

    2014-01-01

    Background The scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature. Methods A scoping review was conducted using the Arksey and O'Malley framework. A search was conducted in four bibliographic databases and the gray literature to identify scoping review studies. Review selection and characterization were performed by two independent reviewers using pretested forms. Results The search identified 344 scoping reviews published from 1999 to October 2012. The reviews varied in terms of purpose, methodology, and detail of reporting. Nearly three-quarters of the reviews (74.1%) addressed a health topic. Study completion times varied from 2 weeks to 20 months, and 51% utilized a published methodological framework. Quality assessment of included studies was infrequently performed (22.38%). Conclusions Scoping reviews are a relatively new but increasingly common approach for mapping broad topics. Because of variability in their conduct, there is a need for their methodological standardization to ensure the utility and strength of evidence. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26052958

  17. A method for detecting and characterizing outbreaks of infectious disease from clinical reports.

    PubMed

    Cooper, Gregory F; Villamarin, Ricardo; Rich Tsui, Fu-Chiang; Millett, Nicholas; Espino, Jeremy U; Wagner, Michael M

    2015-02-01

    Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising. Copyright © 2014 Elsevier Inc. All rights reserved.
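
    The model-scoring step of such a Bayesian framework can be sketched as follows: each candidate epidemiological model predicts expected daily diagnosis counts, and the posterior over models follows from Poisson likelihoods and model priors. All counts, models, and priors below are fabricated for illustration.

        import math

        def log_poisson(k, lam):
            # log-probability of observing count k under a Poisson with mean lam
            return k * math.log(lam) - lam - math.lgamma(k + 1)

        # observed daily counts of probable-influenza diagnoses (toy data)
        counts = [3, 4, 6, 9, 14, 20]

        # candidate epidemiological models: expected daily counts + prior probability
        models = {
            "no_outbreak":   ([5, 5, 5, 5, 5, 5],   0.7),
            "slow_outbreak": ([3, 4, 5, 7, 9, 12],  0.2),
            "fast_outbreak": ([3, 4, 6, 9, 14, 21], 0.1),
        }

        # posterior over models: P(M|D) proportional to P(D|M) * P(M)
        log_post = {}
        for name, (lam, prior) in models.items():
            ll = sum(log_poisson(k, l) for k, l in zip(counts, lam))
            log_post[name] = ll + math.log(prior)

        z = max(log_post.values())
        norm = sum(math.exp(v - z) for v in log_post.values())
        for name, v in log_post.items():
            print(name, round(math.exp(v - z) / norm, 3))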

  18. A Method for Detecting and Characterizing Outbreaks of Infectious Disease from Clinical Reports

    PubMed Central

    Cooper, Gregory F.; Villamarin, Ricardo; Tsui, Fu-Chiang (Rich); Millett, Nicholas; Espino, Jeremy U.; Wagner, Michael M.

    2014-01-01

    Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising. PMID:25181466

  19. A predictive machine learning approach for microstructure optimization and materials design

    DOE PAGES

    Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; ...

    2015-06-23

    This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, the multi-objective design requirement and the non-uniqueness of solutions. These challenges render traditional search-based optimization methods ineffective in terms of both search efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures satisfying both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods, with average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
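
    As a hedged sketch of the framework's three stages (random data generation, feature selection, classification), the example below generates random candidate microstructure descriptors, labels them with a synthetic property function standing in for the paper's physics-based models, and trains a classifier that maps out the feasible region. The descriptor count, the property function, and the model choices are illustrative assumptions.

    ```python
    # Learn a classifier for "satisfies the property constraints" so that new
    # microstructure candidates can be screened without search-based optimization.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(5000, 20))   # random microstructure descriptors

    def satisfies_properties(x):
        # Synthetic stand-in for elastic/plastic/magnetostrictive constraints.
        return (x[0] + 0.5 * x[1] > 0.8) and (x[2] * x[3] < 0.2)

    y = np.array([satisfies_properties(x) for x in X])

    selector = SelectKBest(f_classif, k=5).fit(X, y)   # keep informative features
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(selector.transform(X), y)

    # The trained classifier now screens new candidates cheaply.
    candidates = rng.uniform(0, 1, size=(10, 20))
    print(clf.predict(selector.transform(candidates)))
    ```

    Once trained, the classifier replaces repeated calls to an expensive optimizer, which is where the reported speed-up over traditional search-based methods comes from.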

  20. An Ensemble Framework Coping with Instability in the Gene Selection Process.

    PubMed

    Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M

    2018-03-01

    This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces different instability problems arising from the informative gene subsets found by different filter methods. This makes the identification of significant genes by experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease and multiple valid groups of biomarkers. Even though there are many proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. This framework performs a process of stable feature selection, facing the problems above and, thus, providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection were added in such a way that different classifiers simultaneously assess gene subsets to face instability. Firstly, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability according to filter methods). Next, we apply an ensemble of known classifiers to retain genes relevant to all classifiers at once (stability according to classification methods). The results were evaluated on two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability according to the disease, and promising results were achieved.
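
    A minimal sketch of the ensemble idea, assuming just two filter methods and two classifiers (the paper uses a larger five-stage pipeline): genes ranked highly by every filter are kept, and the surviving subset is then checked for relevance to all classifiers at once.

    ```python
    # Stage 1: agreement across filters; Stage 2: agreement across classifiers.
    # Filters, classifiers, and the synthetic data are illustrative stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import f_classif, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=100, n_features=200, n_informative=10,
                               random_state=0)
    k = 20

    # Genes that land in the top-k of every filter method count as "stable".
    scores = [f_classif(X, y)[0], mutual_info_classif(X, y, random_state=0)]
    top_sets = [set(np.argsort(s)[-k:]) for s in scores]
    # Fall back to the union if the intersection happens to be empty.
    stable = sorted(set.intersection(*top_sets)) or sorted(set.union(*top_sets))

    # Keep the subset only if it works for every classifier simultaneously.
    for clf in (SVC(), GaussianNB()):
        acc = cross_val_score(clf, X[:, stable], y, cv=5).mean()
        print(type(clf).__name__, f"accuracy with {len(stable)} stable genes: {acc:.2f}")
    ```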

  1. Practice nursing in Australia: A review of education and career pathways

    PubMed Central

    Parker, Rhian M; Keleher, Helen M; Francis, Karen; Abdulwadud, Omar

    2009-01-01

    Background Nurses in Australia are often not educated in their pre-registration years to meet the needs of primary care. Careers in primary care may not be as attractive to nursing graduates as high-tech settings such as intensive or acute care. Yet, it is in primary care that increasingly complex health problems are managed. The Australian government has invested in incentives for general practices to employ practice nurses. However, no policy framework has been developed for practice nursing to support career development, and post-registration education and training programs are developed in an ad hoc manner and are not underpinned by core professional competencies. This paper reports on a systematic review undertaken to establish the available evidence on education models and career pathways with a view to enhancing recruitment and retention of practice nurses in primary care in Australia. Methods Search terms describing education models, career pathways and policy associated with primary care (practice) nursing were established. These search terms were used to search electronic databases. The search strategy identified 1394 citations, of which 408 addressed one or more of the key search terms on policy, education and career pathways. Grey literature from UK and New Zealand internet sites was sourced and examined. The UK and New Zealand internet sites were selected because they have well-established and advanced developments in education and career pathways for practice nurses. Two reviewers examined titles, abstracts and studies, based on inclusion and exclusion criteria. Disagreement between the reviewers was resolved by consensus or by a third reviewer. Results Significant advances have been made in New Zealand and the UK towards strengthening frameworks for primary care nursing education and career pathways. However, in Australia there is no national policy to prepare nurses to work in the primary care sector and no framework for education or career pathways for nurses working in that sector. Conclusion There is a need for national training standards and a process of accreditation for practice nursing in Australia to support the development of a responsive and sustainable nursing workforce in primary care and to provide quality education and career pathways. PMID:19473493

  2. Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method

    PubMed Central

    Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan

    2012-01-01

    The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low-resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid-body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. Across the eight proteins with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and is possibly applicable to high-throughput protein structure refinement. PMID:22260550

  3. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    PubMed Central

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying two techniques, fitness sharing and elitism. Two NSCT-based methods are proposed for registration, and a comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its ability to speed up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared to the wavelet approach, with the second technique giving the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work well for multi-temporal satellite images, even in the presence of noise. PMID:22163672

  4. A rigid image registration based on the nonsubsampled contourlet transform and genetic algorithms.

    PubMed

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying two techniques, fitness sharing and elitism. Two NSCT-based methods are proposed for registration, and a comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its ability to speed up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared to the wavelet approach, with the second technique giving the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work well for multi-temporal satellite images, even in the presence of noise.
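
    The search component lends itself to a compact sketch: a genetic algorithm with plain elitism explores rigid-transform parameters (rotation and shifts) to maximize a normalized-correlation similarity between the two images. The NSCT multiresolution front end and the fitness-sharing details of the paper are omitted, and all settings here are illustrative.

    ```python
    # GA over (angle, tx, ty) maximizing correlation between warped and
    # reference images; elitism plus Gaussian mutation, no crossover.
    import numpy as np
    from scipy.ndimage import rotate, shift

    rng = np.random.default_rng(2)
    ref = rng.random((64, 64))
    moving = shift(rotate(ref, angle=7.0, reshape=False), (3.0, -2.0))

    def fitness(p):
        ang, tx, ty = p
        warped = shift(rotate(moving, angle=-ang, reshape=False), (-tx, -ty))
        return np.corrcoef(warped.ravel(), ref.ravel())[0, 1]

    pop = rng.uniform([-15, -8, -8], [15, 8, 8], size=(40, 3))
    for gen in range(30):
        fit = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(fit)[-10:]]              # keep the best 10
        children = elite[rng.integers(0, 10, size=30)] + rng.normal(0, 0.5, (30, 3))
        pop = np.vstack([elite, children])              # mutate elite copies

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("estimated (angle, tx, ty):", np.round(best, 2))  # roughly (7, 3, -2)
    ```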

  5. Design and Implementation of Distributed Crawler System Based on Scrapy

    NASA Astrophysics Data System (ADS)

    Fan, Yuhao

    2018-01-01

    At present, large-scale search engines in China and abroad provide users only with non-customized search services, and a single-machine web crawler cannot handle demanding crawling tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy and Redis: a distributed crawler system based on the Scrapy framework is designed and implemented, and the Bloom Filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information captured from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
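
    The memory-saving trick mentioned above is easy to sketch. Below is a hedged, self-contained Bloom filter of the kind applied in the dupefilter module; a local bytearray stands in for the shared Redis bitmap a Scrapy-Redis deployment would use, and the sizes are illustrative.

    ```python
    # Bloom filter for URL de-duplication: k hash positions per URL set in a
    # fixed-size bit array, giving constant memory at the cost of rare false
    # positives (a URL wrongly reported as seen, never the reverse).
    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.k = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, url):
            for i in range(self.k):
                h = hashlib.md5(f"{i}:{url}".encode()).hexdigest()
                yield int(h, 16) % self.size

        def add(self, url):
            for pos in self._positions(url):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def seen(self, url):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(url))

    bf = BloomFilter()
    bf.add("https://movie.douban.com/subject/1292052/")
    print(bf.seen("https://movie.douban.com/subject/1292052/"))  # True
    print(bf.seen("https://movie.douban.com/subject/1291546/"))  # False (w.h.p.)
    ```

    In a distributed deployment, the bit array would live in Redis so every crawler node shares one view of which URLs have been fetched.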

  6. Developing a Shuffled Complex-Self Adaptive Hybrid Evolution (SC-SAHEL) Framework for Water Resources Management and Water-Energy System Optimization

    NASA Astrophysics Data System (ADS)

    Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.

    2017-12-01

    Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. The simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, have algorithm-specific strengths and limitations. The performance of each individual algorithm obeys the "No-Free-Lunch" theorem, which states that no single algorithm can consistently outperform all others across all possible optimization problems. From a user's perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or a set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme and allows users to select the most suitable algorithm tailored to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores and let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best-performing algorithm for the given optimization problem. The algorithm is effective in finding the global optimum for several demanding benchmark test functions, and is computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm over 29 conceptual test functions and two real-world case studies - one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in the absolute majority of the test problems, and can provide results competitive with the fittest EA while yielding more comprehensive information during the search. The proposed framework is also flexible for merging additional EAs, boundary-handling techniques, and sampling schemes, and has good potential to be used in optimal operation and management of water-energy systems.
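
    The competition mechanism can be sketched compactly: several search cores, each driven by a different evolutionary operator, evolve their own sub-populations in parallel, and their best results are compared so the framework can favor the fittest algorithm. The two toy operators and the Rosenbrock objective below are illustrative assumptions, not the EAs actually merged into SC-SAHEL.

    ```python
    # Two competing search cores on the same objective; the framework reports
    # which core made the most progress, mimicking the competition idea.
    import numpy as np

    rng = np.random.default_rng(3)

    def rosenbrock(x):
        return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

    def de_step(pop):      # differential-evolution-style mutation
        i, j, k = rng.choice(len(pop), 3, replace=False)
        return pop[i] + 0.8 * (pop[j] - pop[k])

    def gauss_step(pop):   # Gaussian perturbation of the current best member
        best = min(pop, key=rosenbrock)
        return best + rng.normal(0, 0.1, size=best.shape)

    cores = {"DE-like": de_step, "Gaussian": gauss_step}
    pops = {name: [rng.uniform(-2, 2, 5) for _ in range(20)] for name in cores}

    for generation in range(200):
        for name, step in cores.items():
            child = step(pops[name])
            worst = max(range(20), key=lambda m: rosenbrock(pops[name][m]))
            if rosenbrock(child) < rosenbrock(pops[name][worst]):
                pops[name][worst] = child   # replace the worst member

    for name, pop in pops.items():
        print(name, "best:", round(min(map(rosenbrock, pop)), 4))
    winner = min(cores, key=lambda n: min(map(rosenbrock, pops[n])))
    print("fittest core:", winner)
    ```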

  7. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    PubMed

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

    Electronic medical records are increasingly common in medical practice, and the secondary use of medical records has become increasingly important. It relies on the ability to retrieve complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter is used to provide highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Finally, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real time. The semantics expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance availability and universality, with medical reports displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.

  8. Investigation of methods to search for the boundaries on the image and their use on lung hardware of methods finding saliency map

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.

    2015-05-01

    This work aimed to study a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the desire to run complex techniques on portable devices. A saliency map can increase the speed and reduce the computational complexity of many subsequent algorithms. The proposed method of saliency map detection is based on both image-space and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Saliency Toolbox framework in terms of accuracy and speed.
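
    As a concrete, hedged illustration of combining image- and frequency-space analysis, the sketch below implements the classic spectral-residual saliency method (Hou & Zhang, 2007), which is computationally simple in the same spirit; it is a stand-in, not the authors' exact algorithm.

    ```python
    # Spectral-residual saliency: subtract the smoothed log-amplitude spectrum
    # from the raw one, then invert the FFT with the original phase.
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def spectral_residual_saliency(gray):
        spectrum = np.fft.fft2(gray)
        log_amp = np.log1p(np.abs(spectrum))
        phase = np.angle(spectrum)
        residual = log_amp - uniform_filter(log_amp, size=3)  # remove smooth trend
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return gaussian_filter(saliency, sigma=2.5)           # smooth the map

    img = np.zeros((128, 128))
    img[50:60, 70:80] = 1.0                                   # one salient patch
    sal = spectral_residual_saliency(img)
    # The maximum should land on or near the patch.
    print("most salient pixel:", np.unravel_index(sal.argmax(), sal.shape))
    ```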

  9. Theories, models and frameworks used in capacity building interventions relevant to public health: a systematic review.

    PubMed

    Bergeron, Kim; Abdi, Samiya; DeCorby, Kara; Mensah, Gloria; Rempel, Benjamin; Manson, Heather

    2017-11-28

    There is limited research on capacity building interventions that include theoretical foundations. The purpose of this systematic review is to identify underlying theories, models and frameworks used to support capacity building interventions relevant to public health practice. The aim is to inform and improve capacity building practices and services offered by public health organizations. Four search strategies were used: 1) electronic database searching; 2) reference lists of included papers; 3) key informant consultation; and 4) grey literature searching. Inclusion and exclusion criteria are outlined, with included papers focusing on capacity building, learning plans, or professional development plans in combination with tools, resources, processes, procedures, steps, models, frameworks, or guidelines, described in a public health or healthcare setting, or in non-government, government, or community organizations as they relate to healthcare, and explicitly or implicitly mentioning a theory, model and/or framework that grounds the type of capacity building approach developed. Quality assessment was performed on all included articles. Data analysis included a process for synthesizing, analyzing and presenting descriptive summaries, categorizing theoretical foundations according to which theory, model and/or framework was used and whether the theory, model or framework was implied or explicitly identified. Nineteen articles were included in this review. A total of 28 theories, models and frameworks were identified. Of this number, two theories (Diffusion of Innovations and Transformational Learning), two models (Ecological and Interactive Systems Framework for Dissemination and Implementation) and one framework (Bloom's Taxonomy of Learning) were identified as the most frequently cited. This review identifies specific theories, models and frameworks to support capacity building interventions relevant to public health organizations. It provides public health practitioners with a menu of potentially usable theories, models and frameworks to support capacity building efforts. The findings also support the need for the use of theories, models or frameworks to be intentional, explicitly identified and referenced, with a clear outline of how they were applied to the capacity building intervention.

  10. Qualitative systematic reviews of treatment burden in stroke, heart failure and diabetes - methodological challenges and solutions.

    PubMed

    Gallacher, Katie; Jani, Bhautesh; Morrison, Deborah; Macdonald, Sara; Blane, David; Erwin, Patricia; May, Carl R; Montori, Victor M; Eton, David T; Smith, Fiona; Batty, G David; Mair, Frances S

    2013-01-28

    Treatment burden can be defined as the self-care practices that patients with chronic illness must perform to respond to the requirements of their healthcare providers, as well as the impact that these practices have on patient functioning and well-being. Increasing levels of treatment burden may lead to suboptimal adherence and negative outcomes. Systematic review of the qualitative literature is a useful method for exploring the patient experience of care, in this case the experience of treatment burden. There is no consensus on methods for qualitative systematic review. This paper describes the methodology used for qualitative systematic reviews of the treatment burdens identified in three different common chronic conditions, using stroke as our exemplar. Qualitative studies in peer-reviewed journals seeking to understand the patient experience of stroke management were sought. Searches were limited to English-language papers published from 2000 onwards. An exhaustive search strategy was employed, consisting of a scoping search, database searches (Scopus, CINAHL, Embase, Medline & PsycINFO) and reference, footnote and citation searching. Papers were screened, data extracted, quality appraised and analysed by two individuals, with disagreements resolved by a third party. Data analysis was carried out using a coding framework underpinned by Normalization Process Theory (NPT). A total of 4364 papers were identified, and 54 were included in the review. Of these, 51 (94%) were retrieved from our database search. Methodological issues included: creating an appropriate search strategy; investigating a topic not previously conceptualised; sorting through irrelevant data within papers; the quality appraisal of qualitative research; and the use of NPT as a novel method of data analysis, shown to be a useful method for the purposes of this review. The creation of our search strategy may be of particular interest to other researchers carrying out synthesis of qualitative studies. Importantly, the successful use of NPT to inform a coding frame for the analysis of qualitative data describing self-management processes highlights the potential of a new method for analyses of qualitative data within systematic reviews.

  11. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
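
    The nested structure at the heart of the approach can be sketched as bilevel single-objective optimization: for each upper-level decision, the lower-level problem is solved to its optimal response, and the upper level is optimized over that response. The quadratic toy objectives below are illustrative assumptions; the DSD machinery for spreading Pareto points is not reproduced.

    ```python
    # Nested bilevel optimization: the leader anticipates the follower's
    # optimal reaction at every candidate decision.
    from scipy.optimize import minimize_scalar

    def lower_level_response(x):
        # Follower: given the leader's x, choose y minimizing its objective.
        res = minimize_scalar(lambda y: (y - x) ** 2 + 0.1 * y ** 2)
        return res.x

    def upper_level_objective(x):
        y = lower_level_response(x)      # solve the lower level to optimality
        return (x - 2) ** 2 + (y - 1) ** 2

    res = minimize_scalar(upper_level_objective, bounds=(-5, 5), method="bounded")
    print(f"leader x*={res.x:.3f}, follower y*={lower_level_response(res.x):.3f}")
    ```

    Solving the lower level to a single optimal response, rather than generating its whole Pareto frontier, is exactly the efficiency argument made in the abstract.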

  12. The Sharing Experimental Animal Resources, Coordinating Holdings (SEARCH) Framework: Encouraging Reduction, Replacement, and Refinement in Animal Research.

    PubMed

    Morrissey, Bethny; Blyth, Karen; Carter, Phil; Chelala, Claude; Jones, Louise; Holen, Ingunn; Speirs, Valerie

    2017-01-01

    While significant medical breakthroughs have been achieved through using animal models, our experience shows that often there is surplus material remaining that is frequently never revisited but could be put to good use by other scientists. Recognising that most scientists are willing to share this material on a collaborative basis, it makes economic, ethical, and academic sense to explore the option to utilise this precious resource before generating new/additional animal models and associated samples. To bring together those requiring animal tissue and those holding this type of archival material, we have devised a framework called Sharing Experimental Animal Resources, Coordinating Holdings (SEARCH) with the aim of making remaining material derived from animal studies in biomedical research more visible and accessible to the scientific community. We encourage journals, funding bodies, and scientists to unite in promoting a new way of approaching animal research by adopting the SEARCH framework.

  13. The Sharing Experimental Animal Resources, Coordinating Holdings (SEARCH) Framework: Encouraging Reduction, Replacement, and Refinement in Animal Research

    PubMed Central

    Morrissey, Bethny; Blyth, Karen; Carter, Phil; Chelala, Claude; Jones, Louise; Holen, Ingunn; Speirs, Valerie

    2017-01-01

    While significant medical breakthroughs have been achieved through using animal models, our experience shows that often there is surplus material remaining that is frequently never revisited but could be put to good use by other scientists. Recognising that most scientists are willing to share this material on a collaborative basis, it makes economic, ethical, and academic sense to explore the option to utilise this precious resource before generating new/additional animal models and associated samples. To bring together those requiring animal tissue and those holding this type of archival material, we have devised a framework called Sharing Experimental Animal Resources, Coordinating Holdings (SEARCH) with the aim of making remaining material derived from animal studies in biomedical research more visible and accessible to the scientific community. We encourage journals, funding bodies, and scientists to unite in promoting a new way of approaching animal research by adopting the SEARCH framework. PMID:28081116

  14. RooStatsCms: A tool for analysis modelling, combination and statistical studies

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Schott, G.; Quast, G.

    2010-04-01

    RooStatsCms is an object oriented statistical framework based on the RooFit technology. Its scope is to allow the modelling, statistical analysis and combination of multiple search channels for new phenomena in High Energy Physics. It provides a variety of methods described in literature implemented as classes, whose design is oriented to the execution of multiple CPU intensive jobs on batch systems or on the Grid.

  15. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    PubMed Central

    Arshad, Sannia; Rho, Seungmin

    2014-01-01

    We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers, such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302

  16. Robust framework to combine diverse classifiers assigning distributed confidence to individual classifiers at class level.

    PubMed

    Khalid, Shehzad; Arshad, Sannia; Jabbar, Sohail; Rho, Seungmin

    2014-01-01

    We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers, such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.
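
    The weight-learning stage can be sketched as a small evolutionary search over per-class weights, one per (classifier, class) pair, scored by ensemble accuracy on held-out data. The m-Mediods noise-filtering stage is omitted, and the classifiers and the mutation-only search below are illustrative stand-ins for the paper's genetic algorithm.

    ```python
    # Evolve a (classifier x class) weight matrix that blends predicted
    # probabilities so the ensemble's validation accuracy is maximized.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_classes=3, n_informative=6,
                               random_state=0)
    Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

    clfs = [SVC(probability=True).fit(Xtr, ytr), GaussianNB().fit(Xtr, ytr)]
    probas = np.stack([c.predict_proba(Xval) for c in clfs])  # (clf, sample, class)

    def accuracy(W):                 # W holds one weight per (clf, class) pair
        combined = np.einsum("kc,knc->nc", W, probas)
        return np.mean(combined.argmax(axis=1) == yval)

    rng = np.random.default_rng(4)
    best_W = np.ones((len(clfs), 3))
    best_acc = accuracy(best_W)
    for _ in range(300):             # mutation-only evolutionary search
        W = np.clip(best_W + rng.normal(0, 0.2, best_W.shape), 0, None)
        acc = accuracy(W)
        if acc > best_acc:
            best_W, best_acc = W, acc
    print(f"uniform weights: {accuracy(np.ones((2, 3))):.3f} -> evolved: {best_acc:.3f}")
    ```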

  17. Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.

    PubMed

    Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling

    2018-04-01

    In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular, and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation for unsupervised scenarios. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into supervised by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximumly preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
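
    The pseudo-label conversion trick admits a shallow, hedged sketch: cluster the unlabeled features to obtain pseudo labels, train a projection supervised by those labels, and binarize its outputs into hash codes. k-means and a least-squares projection stand in for the paper's deep network and its unified likelihood, mutual-information and quantization objectives.

    ```python
    # Discover pseudo labels by clustering, then learn a linear hash function
    # supervised by them; sign() gives the final binary codes.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=500, n_features=32, centers=8, random_state=0)
    n_bits = 16

    pseudo = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

    # Target codes: one +1 bit per pseudo class, learned by least squares.
    T = -np.ones((len(X), n_bits))
    T[np.arange(len(X)), pseudo % n_bits] = 1
    W, *_ = np.linalg.lstsq(X, T, rcond=None)
    codes = np.sign(X @ W)                     # binary hash codes in {-1, +1}

    # Retrieval check: Hamming neighbours should share the query's pseudo label.
    q = 0
    ham = (codes != codes[q]).sum(axis=1)
    print("query label:", pseudo[q], "| top-5 labels:", pseudo[np.argsort(ham)[1:6]])
    ```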

  18. Research on bulbous bow optimization based on the improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang

    2017-08-01

    In order to reduce the total resistance of a hull, an optimization framework for bulbous bow optimization was presented. The total resistance in calm water was selected as the objective function, and the overset mesh technique was used for mesh generation. The RANS method was used to calculate the total resistance of the hull. In order to improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique was introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm was proposed to set up the optimization model. After a series of optimization analyses, the optimal hull form was found. It can be concluded that the simulation-based design framework built in this paper is a promising method for bulbous bow optimization.
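
    The optimization loop itself is standard enough to sketch: a particle swarm searches over a few bow-shape parameters, with the expensive RANS resistance evaluation replaced here by an analytic stand-in, and a linearly decaying inertia weight as a simple nod to the "improved" PSO (the paper's exact IPSO modifications are not reproduced).

    ```python
    # Particle swarm optimization over bow-shape parameters; the objective is
    # a synthetic stand-in for the CFD-computed total resistance.
    import numpy as np

    rng = np.random.default_rng(5)

    def total_resistance(x):                 # stand-in for the RANS solver
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x).sum()

    n, dim = 20, 4
    pos = rng.uniform(-1, 1, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_f = np.array([total_resistance(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for it in range(100):
        w = 0.9 - 0.5 * it / 100             # linearly decaying inertia weight
        r1, r2 = rng.random((2, n, dim))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([total_resistance(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()

    print("optimal bow parameters:", np.round(gbest, 3))
    ```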

  19. Treatment of antisocial personality disorder: Development of a practice focused framework.

    PubMed

    van den Bosch, L M C; Rijckmans, M J N; Decoene, S; Chapman, A L

    There is little to no evidence of effective treatment methods for patients with an antisocial personality disorder (ASPD). One of the reasons could be the fact that they are often excluded from mental healthcare and thus from studies. A treatment framework based on 'state of the art' methods and best practices, offering guidelines on the treatment of ASPD and possibilities for more systematic research, is urgently needed. This research involved a literature search and an international Delphi study (N = 61 experts in research, management and clinical practice focused on ASPD). The results suggested important preconditions with regard to the organization of care, healthcare workers and therapy. The conclusions are that there are many ways to coordinate effective treatment and management and to work toward the increased availability of evidence-based care for persons with ASPD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Health Impact Assessment as a framework for evaluation of local complex projects.

    PubMed

    Heath, Lucy

    2007-07-01

    Health impact assessment (HIA) has been used to predict the effects of a local parenting strategy and to develop an evaluation framework. Methods used included literature searches, inequalities profiling, interviews with key informants and a review of available cost data. Four priority areas where parenting can potentially have an impact were identified: education, antisocial behaviour, lifestyle choices and mental health. The results concerning mental health are presented here. Improving the quality of parenting can impact a child's mental health. The costs relating to mental health outcomes are high, and parenting is a cost-effective way to address the family dynamics that influence these outcomes. Intermediary indicators, including clear boundaries, time spent as a family and parental involvement, can be used to evaluate the intervention in the short term, although there are difficulties in their measurement. The HIA process can improve cross-sectoral working, increase community participation and keep inequalities on the agenda.

  1. Heuristic Identification of Biological Architectures for Simulating Complex Hierarchical Genetic Interactions

    PubMed Central

    Moore, Jason H; Amos, Ryan; Kiralis, Jeff; Andrews, Peter C

    2015-01-01

    Simulation plays an essential role in the development of new computational and statistical methods for the genetic analysis of complex traits. Most simulations start with a statistical model using methods such as linear or logistic regression that specify the relationship between genotype and phenotype. This is appealing due to its simplicity and because these statistical methods are commonly used in genetic analysis. It is our working hypothesis that simulations need to move beyond simple statistical models to more realistically represent the biological complexity of genetic architecture. The goal of the present study was to develop a prototype genotype–phenotype simulation method and software that are capable of simulating complex genetic effects within the context of a hierarchical biology-based framework. Specifically, our goal is to simulate multilocus epistasis or gene–gene interaction where the genetic variants are organized within the framework of one or more genes, their regulatory regions and other regulatory loci. We introduce here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating data in this manner. This approach combines a biological hierarchy, a flexible mathematical framework, a liability threshold model for defining disease endpoints, and a heuristic search strategy for identifying high-order epistatic models of disease susceptibility. We provide several simulation examples using genetic models exhibiting independent main effects and three-way epistatic effects. PMID:25395175
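
    The simulation core admits a brief, hedged sketch: genotypes are drawn per locus, a liability score is built from main effects plus a three-way epistatic term, and a threshold converts liability to case/control status. The effect sizes and the particular epistasis pattern below are illustrative assumptions, not models evolved by HIBACHI.

    ```python
    # Liability-threshold simulation with main effects and 3-way epistasis.
    import numpy as np

    rng = np.random.default_rng(6)
    n, n_snps = 2000, 10
    geno = rng.binomial(2, 0.3, size=(n, n_snps))    # 0/1/2 minor-allele counts

    main = geno @ rng.normal(0, 0.1, n_snps)         # independent main effects
    # Three-way epistatic term: a penetrance bump for one genotype combination.
    epi = 0.8 * ((geno[:, 0] == 1) & (geno[:, 1] == 2) & (geno[:, 2] == 0))
    liability = main + epi + rng.normal(0, 1, n)     # plus environmental noise

    threshold = np.quantile(liability, 0.8)          # top 20% liability = cases
    status = (liability > threshold).astype(int)
    print("prevalence:", status.mean())
    ```

    Data simulated this way carries interaction signal that purely additive regression models will miss, which is the point of moving beyond simple statistical simulators.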

  2. A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems.

    PubMed

    Sabar, Nasser R; Ayob, Masri; Kendall, Graham; Qu, Rong

    2015-02-01

    Hyper-heuristics are search methodologies that aim to provide high-quality solutions across a wide variety of problem domains, rather than developing tailor-made methodologies for each problem instance/domain. A traditional hyper-heuristic framework has two levels, namely, the high-level strategy (heuristic selection mechanism and acceptance criterion) and the low-level heuristics (a set of problem-specific heuristics). Because different problem instances have different landscape structures, the high-level strategy plays an important role in the design of a hyper-heuristic framework. In this paper, we propose a new high-level strategy for a hyper-heuristic framework. The proposed high-level strategy utilizes a dynamic multiarmed bandit-extreme value-based reward as an online heuristic selection mechanism to select the appropriate heuristic to apply at each iteration. In addition, we propose a gene expression programming framework to automatically generate the acceptance criterion for each problem instance, instead of using human-designed criteria. Two well-known, and very different, combinatorial optimization problems, one static (exam timetabling) and one dynamic (dynamic vehicle routing), are used to demonstrate the generality of the proposed framework. Empirical results demonstrate that, compared with state-of-the-art hyper-heuristics and other bespoke methods, the proposed framework generalizes well across both domains. We obtain competitive, if not better, results when compared with the best known results obtained by other methods presented in the scientific literature. We also compare our approach against the recently released hyper-heuristic competition test suite, again demonstrating the generality of our approach against other methods that have utilized the same six benchmark datasets from this test suite.
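
    The high-level selection strategy can be sketched with a UCB-style bandit: each low-level heuristic is an arm, rewarded by the improvement it produces when pulled. The bit-flip heuristics, one-max objective, and simple acceptance rule below are toy stand-ins; the paper's extreme-value reward and GEP-generated acceptance criteria are not reproduced.

    ```python
    # Multi-armed bandit selecting which low-level heuristic to apply each
    # iteration; reward = positive improvement in the objective.
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.integers(0, 2, 50)                  # a toy binary solution
    objective = lambda s: s.sum()               # one-max: count the ones

    def flip_one(s):
        s = s.copy()
        s[rng.integers(len(s))] ^= 1
        return s

    def flip_two(s):
        return flip_one(flip_one(s))

    heuristics = [flip_one, flip_two]
    counts = np.zeros(2)
    rewards = np.zeros(2)

    for t in range(1, 500):
        safe = np.maximum(counts, 1)            # avoid divide-by-zero early on
        ucb = rewards / safe + np.sqrt(2 * np.log(t) / safe)
        h = int(np.argmax(ucb))
        candidate = heuristics[h](x)
        gain = objective(candidate) - objective(x)
        counts[h] += 1
        rewards[h] += max(gain, 0)
        if gain >= 0:                           # simple acceptance criterion
            x = candidate

    print("ones:", objective(x), "| pulls per heuristic:", counts.astype(int))
    ```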

  3. Neuroimaging studies of cognitive remediation in schizophrenia: A systematic and critical review

    PubMed Central

    Penadés, Rafael; González-Rodríguez, Alexandre; Catalán, Rosa; Segura, Bàrbara; Bernardo, Miquel; Junqué, Carme

    2017-01-01

    AIM To examine the effects of cognitive remediation therapies on brain functioning through neuroimaging procedures in patients with schizophrenia. METHODS A systematic, computerised literature search was conducted in the PubMed/Medline and PsycINFO databases. The search was performed through February 2016, without any restrictions on language or publication date, using the following search terms: [(“cogniti*” and “remediation” or “training” or “enhancement”) and (“fMRI” or “MRI” or “PET” or “SPECT”) and (schizophrenia or schiz*)]. The search was accompanied by a manual online search and a review of the references from each of the papers selected, and papers fulfilling our inclusion criteria were also included. RESULTS A total of 101 studies were found, but only 18 of them fulfilled the inclusion criteria. These studies indicated that cognitive remediation improves brain activation in neuroimaging studies. The most commonly reported changes were those involving the prefrontal and thalamic regions. These findings are in agreement with the hypofrontality hypothesis, which proposes that frontal hypoactivation is the underlying mechanism of cognitive impairments in schizophrenia. Nonetheless, great heterogeneity among the studies was found: they presented different hypotheses and reported different findings. The results of more recent studies interpreted cognitive recovery within broader frameworks, namely, as amelioration of the efficiency of different networks. Furthermore, advances in neuroimaging methodologies, such as the use of whole-brain analysis, tractography, graph analysis, and other sophisticated methodologies of data processing, might be conditioning the interpretation of results and generating new theoretical frameworks. Additionally, structural changes were described in both the grey and white matter, suggesting a neuroprotective effect of cognitive remediation. Cognitive, functional and structural improvements tended to be positively correlated. CONCLUSION Neuroimaging studies of cognitive remediation in patients with schizophrenia suggest a positive effect on brain functioning in terms of the functional reorganisation of neural networks. PMID:28401047

  4. (Quickly) Testing the Tester via Path Coverage

    NASA Technical Reports Server (NTRS)

    Groce, Alex

    2009-01-01

    The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.

  5. The relationship and effects of golf on physical and mental health: a scoping review protocol.

    PubMed

    Murray, A; Daines, L; Archibald, D; Hawkes, R; Grant, L; Mutrie, N

    2016-06-01

    Golf is a sport played in 206 countries worldwide by over 50 million people. It is possible that participation in golf, which is a form of physical activity, may be associated with effects on longevity, the cardiovascular, metabolic and musculoskeletal systems, as well as on mental health and well-being. We outline our scoping review protocol to examine the relationships and effects of golf on physical and mental health. Best practice methodological frameworks suggested by Arksey and O'Malley, Levac et al and the Joanna Briggs Institute will serve as our guide, providing clarity and rigour. A scoping review provides a framework to (1) map the key concepts and evidence, (2) summarise and disseminate existing research findings to practitioners and policymakers and (3) identify gaps in the existing research. A three-step search strategy will identify reviews as well as original research, published and grey literature. An initial search will identify suitable search terms, followed by a search using keyword and index terms. Two reviewers will independently screen identified studies for final inclusion. We will map key concepts and evidence, and disseminate existing research findings to practitioners and policymakers through peer-reviewed and non-peer reviewed publications, conferences and in-person communications. We will identify priorities for further study. This method may prove useful to examine the relationships and effects of other sports on health. Published by the BMJ Publishing Group Limited.

  6. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches.

    PubMed

    Booth, Andrew; Noyes, Jane; Flemming, Kate; Gerhardus, Ansgar; Wahlster, Philip; van der Wilt, Gert Jan; Mozygemba, Kati; Refolo, Pietro; Sacchini, Dario; Tummers, Marcia; Rehfuess, Eva

    2018-07-01

    To compare and contrast different methods of qualitative evidence synthesis (QES) against criteria identified from the literature and to map their attributes to inform selection of the most appropriate QES method to answer research questions addressed by qualitative research. Electronic databases, citation searching, and a study register were used to identify studies reporting QES methods. Attributes compiled from 26 methodological papers (2001-2014) were used as a framework for data extraction. Data were extracted into summary tables by one reviewer and then considered within the author team. We identified seven considerations determining choice of methods from the methodological literature, encapsulated within the mnemonic Review question-Epistemology-Time/Timescale-Resources-Expertise-Audience and purpose-Type of data. We mapped 15 different published QES methods against these seven criteria. The final framework focuses on stand-alone QES methods but may also hold potential when integrating quantitative and qualitative data. These findings offer a contemporary perspective as a conceptual basis for future empirical investigation of the advantages and disadvantages of different methods of QES. It is hoped that this will inform appropriate selection of QES approaches. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  8. SHIFT: Shared Information Framework and Technology Concept

    DTIC Science & Technology

    2009-02-01

    … easy to get an overview of all tasks and what is happening around each mission. Federated Search allows actors to search multiple data sources; the query is sent to every individual database in the portal or federated search list. The idea in SHIFT is to enable …

  9. STEM Education Related Dissertation Abstracts: A Bounded Qualitative Meta-Study

    ERIC Educational Resources Information Center

    Banning, James; Folkestad, James E.

    2012-01-01

    This article utilizes a bounded qualitative meta-study framework to examine the 101 dissertation abstracts found by searching the ProQuest Dissertations and Theses™ digital database for dissertation abstracts from 1990 through 2010 using the search terms education, science, technology, engineer, and STEM/SMET. Professional search librarians…

  10. Evaluating a federated medical search engine: tailoring the methodology and reporting the evaluation outcomes.

    PubMed

    Saparova, D; Belden, J; Williams, J; Richardson, B; Schuster, K

    2014-01-01

    Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants' expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for system improvement.

  11. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  12. Narrative review of frameworks for translating research evidence into policy and practice.

    PubMed

    Milat, Andrew J; Li, Ben

    2017-02-15

    A significant challenge in research translation is that interested parties interpret and apply the associated terms and conceptual frameworks in different ways. The purpose of this review was to: a) examine different research translation frameworks; b) examine the similarities and differences between the frameworks; and c) identify key strengths and weaknesses of the models when they are applied in practice. The review involved a keyword search of PubMed. The search string was (translational research OR knowledge translation OR evidence to practice) AND (framework OR model OR theory) AND (public health OR health promotion OR medicine). Included studies were published in English between January 1990 and December 2014, and described frameworks, models or theories associated with research translation. The final review included 98 papers, and 41 different frameworks and models were identified. The most frequently applied knowledge translation framework in the literature was RE-AIM, followed by the knowledge translation continuum or 'T' models, the Knowledge to Action framework, the PARiHS framework, evidence based public health models, and the stages of research and evaluation model. The models identified in this review stem from different fields, including implementation science, basic and medical sciences, health services research and public health, and propose different but related pathways to closing the research-practice gap.

  13. Lessons Learned from Development of De-identification System for Biomedical Research in a Korean Tertiary Hospital

    PubMed Central

    Shin, Soo-Yong; Lyu, Yongman; Shin, Yongdon; Choi, Hyo Joung; Park, Jihyun; Kim, Woo-Sung

    2013-01-01

    Objectives The Korean government has enacted two laws, namely, the Personal Information Protection Act and the Bioethics and Safety Act, to prevent the unauthorized use of medical information. To protect patients' privacy by complying with governmental regulations and improve the convenience of research, Asan Medical Center has been developing a de-identification system for biomedical research. Methods We reviewed Korean regulations to define the scope of the de-identification methods and well-known previous biomedical research platforms to extract the functionalities of the systems. Based on these review results, we implemented the necessary programs based on the Asan Medical Center Information System framework, which was built using the Microsoft .NET Framework and C#. Results The developed de-identification system comprises three main components: a de-identification tool, a search tool, and a chart review tool. The de-identification tool can substitute a randomly assigned research ID for a hospital patient ID, remove the identifiers in the structured format, and mask them in the unstructured format, i.e., texts. This tool achieved 98.14% precision and 97.39% recall for 6,520 clinical notes. The search tool can find the number of patients that satisfy given search criteria. The chart review tool can provide de-identified patient clinical data for review purposes. Conclusions We found that a clinical data warehouse was essential for successful implementation of the de-identification system, and this system should be tightly linked to an electronic Institutional Review Board system for easy operation of honest brokers. Additionally, we found that a secure cloud environment could be adopted to protect patients' privacy more thoroughly. PMID:23882415
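
    The tool's two core operations described above, ID substitution and masking, can be sketched briefly. The regular expressions and record fields below are illustrative assumptions; a production system would carry the full rule set that the paper tuned to 98.14% precision on clinical notes.

    ```python
    # Substitute a research ID for the hospital ID and mask identifiers in
    # free-text notes; patterns shown are examples only.
    import re
    import secrets

    id_map = {}                                    # hospital ID -> research ID

    def research_id(patient_id):
        if patient_id not in id_map:
            id_map[patient_id] = "R-" + secrets.token_hex(4)
        return id_map[patient_id]

    PATTERNS = [
        (re.compile(r"\b\d{6}-\d{7}\b"), "[RRN]"),     # resident registration no.
        (re.compile(r"\b01[016789]-\d{3,4}-\d{4}\b"), "[PHONE]"),
        (re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"), "[EMAIL]"),
    ]

    def mask_note(text):
        for pattern, token in PATTERNS:
            text = pattern.sub(token, text)
        return text

    record = {"patient_id": "AMC123456",
              "note": "Contact 010-1234-5678 or kim@example.com, RRN 801212-1234567."}
    deidentified = {"patient_id": research_id(record["patient_id"]),
                    "note": mask_note(record["note"])}
    print(deidentified)
    ```

    Keeping the ID map inside an access-controlled honest-broker service, rather than in application code, is what makes the random substitution reversible only for authorized re-identification.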

  14. A novel water quality data analysis framework based on time-series data mining.

    PubMed

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks on the water quality time-series dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly Dissolved Oxygen time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. It uncovered the relationship between water quality in the mainstream and its tributaries, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method to mine hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.
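
    The granulation step can be sketched under simplifying assumptions: each window of the series is summarized by two parameters (mean and spread) approximating a normal cloud, and series are compared through their granule sequences. The window length and similarity form below are illustrative simplifications of the paper's cloud-model granulation.

    ```python
    # Granulate weekly series into (mean, spread) granules, then compare the
    # granule sequences and flag anomalous windows.
    import numpy as np

    rng = np.random.default_rng(8)
    do_main = 8.0 + np.sin(np.linspace(0, 6 * np.pi, 156)) + rng.normal(0, 0.2, 156)
    do_trib = 7.5 + np.sin(np.linspace(0, 6 * np.pi, 156)) + rng.normal(0, 0.3, 156)

    def granulate(series, window=12):
        chunks = series[: len(series) // window * window].reshape(-1, window)
        return np.stack([chunks.mean(axis=1), chunks.std(axis=1)], axis=1)

    def similarity(g1, g2):
        return 1.0 / (1.0 + np.linalg.norm(g1 - g2, axis=1).mean())

    g_main, g_trib = granulate(do_main), granulate(do_trib)
    print(f"mainstream-tributary similarity: {similarity(g_main, g_trib):.3f}")

    # Anomaly detection on the granulated level: windows far from the centroid.
    d = np.linalg.norm(g_main - g_main.mean(axis=0), axis=1)
    print("most anomalous window:", int(d.argmax()))
    ```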

  15. Toward a Mixed-Methods Research Approach to Content Analysis in The Digital Age: The Combined Content-Analysis Model and its Applications to Health Care Twitter Feeds

    PubMed Central

    Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne

    2016-01-01

    Background Twitter’s 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. Objective The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. Methods We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers. We excluded 3 papers and further analyzed 18. Results Results suggest that the methods used in these studies were not purely quantitative or qualitative, and the mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. Conclusions We propose the CCA model as a useful framework that provides a straightforward approach to guide Twitter-driven studies and that adds rigor to health care social media investigations. We provide suggestions for the use of the CCA model in elder care-related contexts. PMID:26957477

  16. Interpretation of searches for supersymmetry with simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  17. Evidence in the learning organization

    PubMed Central

    Crites, Gerald E; McNamara, Megan C; Akl, Elie A; Richardson, W Scott; Umscheid, Craig A; Nishikawa, James

    2009-01-01

    Background Organizational leaders in business and medicine have been experiencing a similar dilemma: how to ensure that their organizational members are adopting work innovations in a timely fashion. Organizational leaders in healthcare have attempted to resolve this dilemma by offering specific solutions, such as evidence-based medicine (EBM), but organizations are still not systematically adopting evidence-based practice innovations as rapidly as expected by policy-makers (the knowing-doing gap problem). Some business leaders have adopted a systems-based perspective, called the learning organization (LO), to address a similar dilemma. Three years ago, the Society of General Internal Medicine's Evidence-based Medicine Task Force began an inquiry to integrate the EBM and LO concepts into one model to address the knowing-doing gap problem. Methods During the model development process, the authors searched several databases for relevant LO frameworks and their related concepts by using a broad search strategy. To identify the key LO frameworks and consolidate them into one model, the authors used consensus-based decision-making and a narrative thematic synthesis guided by several qualitative criteria. The authors subjected the model to external, independent review and improved upon its design with this feedback. Results The authors found seven LO frameworks particularly relevant to evidence-based practice innovations in organizations. The authors describe their interpretations of these frameworks for healthcare organizations, the process they used to integrate the LO frameworks with EBM principles, and the resulting Evidence in the Learning Organization (ELO) model. They also provide a health organization scenario to illustrate ELO concepts in application. Conclusion The authors intend, by sharing the LO frameworks and the ELO model, to help organizations identify their capacities to learn and share knowledge about evidence-based practice innovations. The ELO model will need further validation and improvement through its use in organizational settings and applied health services research. PMID:19323819

  18. Assessing the impact of healthcare research: A systematic review of methodological frameworks

    PubMed Central

    Keeley, Thomas J.; Calvert, Melanie J.

    2017-01-01

    Background Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix. Methods and findings Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) ‘primary research-related impact’, (2) ‘influence on policy making’, (3) ‘health and health systems impact’, (4) ‘health-related and societal impact’, and (5) ‘broader economic impact’. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched. Conclusions The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research. PMID:28792957

  19. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is nearly impossible. Even for moderately sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
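    The paper's elementarity test is not reproduced in the abstract, but the core LP building block of such a depth-first search is easy to sketch: given a stoichiometric matrix and a candidate set of active reactions, ask an LP solver whether a nonzero steady-state flux confined to that support exists, and prune the search branch if not. The toy network and all names below are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.optimize import linprog

      def feasible_flux(S, active, force):
          # LP feasibility test: does v >= 0 with S v = 0 exist that uses only the
          # 'active' (irreversible) reactions and pushes unit flux through 'force'?
          n = S.shape[1]
          bounds = [(0, None) if j in active else (0, 0) for j in range(n)]
          bounds[force] = (1, None)   # anchor one reaction to exclude v = 0
          res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                        bounds=bounds, method="highs")
          return res.status == 0, res.x

      # Toy network: uptake -> A, two parallel routes A -> B, then B -> export.
      S = np.array([[1, -1, -1,  0],    # metabolite A
                    [0,  1,  1, -1]])   # metabolite B
      ok, v = feasible_flux(S, active={0, 1, 3}, force=0)
      print(ok, v)   # True, with flux confined to reactions 0, 1, 3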

  20. Monitoring the ability to deliver care in low- and middle-income countries: a systematic review of health facility assessment tools

    PubMed Central

    Nickerson, Jason W; Adams, Orvill; Attaran, Amir; Hatcher-Roberts, Janet; Tugwell, Peter

    2015-01-01

    Introduction Health facility assessments are an essential instrument for health system strengthening in low- and middle-income countries. These assessments are used to conduct health facility censuses to assess the capacity of the health system to deliver health care and to identify gaps in the coverage of health services. Despite the valuable role of these assessments, there are currently no minimum standards or frameworks for these tools. Methods We used a structured keyword search of the MEDLINE, EMBASE and HealthStar databases and searched the websites of the World Health Organization, the World Bank and the International Health Facilities Assessment Network to locate all available health facility assessment tools intended for use in low- and middle-income countries. We parsed the various assessment tools to identify similarities between them, which we catalogued into a framework comprising 41 assessment domains. Results We identified 10 health facility assessment tools meeting our inclusion criteria, all of which were included in our analysis. We found substantial variation in the comprehensiveness of the included tools, with the assessments containing indicators in 13 to 33 (median: 25.5) of the 41 assessment domains included in our framework. None of the tools collected data on all 41 of the assessment domains we identified. Conclusions Not only do a large number of health facility assessment tools exist, but the data they collect and the methods they employ are very different. This certainly limits the comparability of the data between different countries’ health systems and probably creates blind spots that impede efforts to strengthen those systems. Agreement is needed on the essential elements of health facility assessments to guide the development of specific indicators and to refine existing instruments. PMID:24895350

  1. Using Genetic Programming with Prior Formula Knowledge to Solve Symbolic Regression Problem.

    PubMed

    Lu, Qiang; Ren, Jun; Wang, Zhiguang

    2016-01-01

    A researcher can quickly infer mathematical expressions of functions by using professional knowledge (called prior knowledge), but the results may be biased and restricted to the researcher's own field due to the limits of that knowledge. In contrast, the genetic programming (GP) method can discover fitted mathematical expressions from a huge search space by running evolutionary algorithms, and its results can be generalized to accommodate different fields of knowledge. However, because GP has to search a huge space, it finds results rather slowly. Therefore, in this paper, a framework connecting Prior Formula Knowledge and GP (PFK-GP) is proposed to reduce the space GP must search. The PFK is built on a Deep Belief Network (DBN), which can identify candidate formulas that are consistent with the features of the experimental data. By using these candidate formulas as seeds for a randomly generated population, PFK-GP finds the right formulas quickly by exploring the search space of data features. We have compared PFK-GP with Pareto GP on regression of eight benchmark problems. The experimental results confirm that PFK-GP can reduce the search space and obtain significant improvements in the quality of symbolic regression (SR).

  2. Job-Searching Expectations, Expectancy Violations, and Communication Strategies of Recent College Graduates

    ERIC Educational Resources Information Center

    Smith, Stephanie A.

    2017-01-01

    Expectancy violations theory, a communicative framework, is applied in this study to understand how recent college graduates form, evaluate, and respond to violated job-searching expectations. In-depth interviews of college seniors (N = 20) who were currently job searching helped answer the three research questions posed. Using a thematic…

  3. blastjs: a BLAST+ wrapper for Node.js.

    PubMed

    Page, Martin; MacLean, Dan; Schudoma, Christian

    2016-02-27

    To cope with the ever-increasing amount of sequence data generated in the field of genomics, the demand for efficient and fast database searches that drive functional and structural annotation in both large- and small-scale genome projects is on the rise. The tools of the BLAST+ suite are the most widely employed bioinformatic method for these database searches. Recent trends in bioinformatics application development show an increasing number of JavaScript apps that are based on modern frameworks such as Node.js. Until now, there has been no way of using database searches with the BLAST+ suite from a Node.js codebase. We developed blastjs, a Node.js library that wraps the search tools of the BLAST+ suite and thus allows developers to easily add significant functionality to any Node.js-based application. blastjs is a library that allows the incorporation of BLAST+ functionality into bioinformatics applications based on JavaScript and Node.js. The library was designed to be as user-friendly as possible and therefore requires only a minimal amount of code in the client application. The library is freely available under the MIT license at https://github.com/teammaclean/blastjs.

  4. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    NASA Astrophysics Data System (ADS)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.
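    The paper does not include implementation detail, but the search side of such a framework reduces to an inverted index from OCR-extracted slide text to topical segments. A minimal sketch, with hypothetical segment data:

      import re
      from collections import defaultdict

      def tokens(text):
          return re.findall(r"[a-z0-9]+", text.lower())

      def build_index(segments):
          # Map each word to the set of segment ids whose OCR text contains it.
          index = defaultdict(set)
          for seg_id, (_, text) in enumerate(segments):
              for word in tokens(text):
                  index[word].add(seg_id)
          return index

      def search(index, segments, query):
          # Return (start_seconds, text) of segments containing every query word.
          hits = None
          for word in tokens(query):
              found = index.get(word, set())
              hits = found if hits is None else hits & found
          return [segments[i] for i in sorted(hits or [])]

      segments = [(0, "Binary search trees: insertion and lookup"),
                  (310, "Balancing: rotations in AVL trees"),
                  (745, "Hash tables versus trees for lookup")]
      idx = build_index(segments)
      print(search(idx, segments, "trees lookup"))   # in-video jump points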

  5. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909

  6. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
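    Neither abstract details the K-score itself, but the map-reduce shape of the search is easy to illustrate: the map phase scores each spectrum against candidate peptides, and the reduce phase keeps the best match per spectrum. The Python sketch below substitutes a naive shared-peak count for the K-score, with fabricated toy masses; it only mirrors the dataflow Hydra distributes across a Hadoop cluster.

      from collections import defaultdict

      def mapper(spectrum_id, peaks, peptide_db, tol=0.5):
          # Map: emit (spectrum_id, (score, peptide)) for every candidate peptide.
          for peptide, frags in peptide_db.items():
              score = sum(1 for p in peaks if any(abs(p - f) <= tol for f in frags))
              yield spectrum_id, (score, peptide)

      def reducer(pairs):
          # Reduce: keep the best-scoring peptide for each spectrum.
          best = defaultdict(lambda: (-1, ""))
          for sid, scored in pairs:
              best[sid] = max(best[sid], scored)
          return dict(best)

      peptide_db = {"PEPTIDE": [98.1, 227.2, 324.3], "PROTEIN": [97.9, 155.1, 312.0]}
      spectra = {"s1": [98.0, 227.1, 324.4], "s2": [155.0, 311.8]}
      pairs = (hit for sid, pk in spectra.items() for hit in mapper(sid, pk, peptide_db))
      print(reducer(pairs))   # {'s1': (3, 'PEPTIDE'), 's2': (2, 'PROTEIN')}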

  7. Legislation for Youth Sport Concussion in Canada: Review, Conceptual Framework, and Recommendations.

    PubMed

    Russell, Kelly; Ellis, Michael J; Bauman, Shannon; Tator, Charles H

    2017-05-01

    In this article, we conduct a review of introduced and enacted youth concussion legislation in Canada and present a conceptual framework and recommendations for future youth sport concussion laws. We conducted online searches of federal, provincial, and territorial legislatures to identify youth concussion bills that were introduced or successfully enacted into law. Internet searches were carried out on July 26 and 27, 2016. Online searches identified six youth concussion bills that were introduced in provincial legislatures, including two each in Ontario and Nova Scotia and one each in British Columbia and Quebec. One of these bills (Ontario Bill 149, Rowan's Law Advisory Committee Act, 2016) was enacted into provincial law; it is not actual concussion legislation, but rather a framework for possible enactment of legislation. Two bills have been introduced in federal parliament, but neither has been enacted into law. At present, there is no provincial or federal concussion legislation that directly legislates concussion education, prevention, management, or policy in youth sports in Canada. The conceptual framework and recommendations presented here should be used to guide the design and implementation of future youth sport concussion laws in Canada.

  8. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is the better discretization method to use with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other existing methods.
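    The split logic described in the abstract is concrete enough to sketch directly: an interval holding more search points than the threshold is split at a random cut, recursively, and the surviving nonempty intervals receive integer codes. The sketch below fixes the threshold for brevity, whereas SoD decreases it at every iteration; names and toy data are illustrative.

      import random

      def split_on_demand(points, lo, hi, threshold, rng):
          # Recursively split [lo, hi) at random cuts while it holds more than
          # 'threshold' points; return the list of nonempty leaf intervals.
          inside = [p for p in points if lo <= p < hi]
          if len(inside) <= threshold:
              return [(lo, hi)] if inside else []
          cut = rng.uniform(lo, hi)
          return (split_on_demand(inside, lo, cut, threshold, rng) +
                  split_on_demand(inside, cut, hi, threshold, rng))

      def discretize(points, intervals):
          # Integer code of the interval containing each point.
          return [next(i for i, (lo, hi) in enumerate(intervals) if lo <= p < hi)
                  for p in points]

      rng = random.Random(42)
      pts = [rng.gauss(0.3, 0.1) for _ in range(50)] + [rng.uniform(0, 1) for _ in range(10)]
      pts = [min(max(p, 0.0), 0.999) for p in pts]   # keep toy points inside [0, 1)
      ivals = split_on_demand(pts, 0.0, 1.0, threshold=8, rng=rng)
      print(len(ivals), discretize(pts, ivals)[:10])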

  9. Directed area search using socio-biological vision algorithms and cognitive Bayesian reasoning

    NASA Astrophysics Data System (ADS)

    Medasani, S.; Owechko, Y.; Allen, D.; Lu, T. C.; Khosla, D.

    2010-04-01

    Volitional search systems that assist the analyst by searching for specific targets or objects such as vehicles, factories, and airports in wide-area overhead imagery need to overcome multiple problems present in current manual and automatic approaches. These problems include finding targets hidden in terabytes of information, relatively few pixels on targets, long intervals between interesting regions, time-consuming analysis requiring many analysts, no a priori representative examples or templates of interest, detecting multiple classes of objects, and the need for very high detection rates and very low false alarm rates. This paper describes a conceptual analyst-centric framework that utilizes existing technology modules to search for and locate occurrences of targets of interest (e.g., buildings, mobile targets of military significance, factories, nuclear plants) in video imagery of large areas. Our framework takes simple queries from the analyst and finds the queried targets with relatively minimal interaction from the analyst. It uses a hybrid approach that combines biologically inspired bottom-up attention, socio-biologically inspired object recognition for volitionally recognizing targets, and hierarchical Bayesian networks for modeling and representing the domain knowledge. This approach has the benefits of high accuracy and a low false alarm rate, and can handle both low-level visual information and high-level domain knowledge in a single framework. Such a system would be of immense help for search and rescue efforts, intelligence gathering, change detection systems, and other surveillance systems.

  10. A Flexible Computational Framework Using R and Map-Reduce for Permutation Tests of Massive Genetic Analysis of Complex Traits.

    PubMed

    Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker

    2017-01-01

    In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, performing 2×10^5 permutations for a 2D QTL problem in 15 hours using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
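    The framework itself is in R, but the permutation-testing core is worth a sketch because it shows why the workload parallelizes so cleanly: every permutation reshuffles the phenotype, rescans all loci, and records a maximum statistic, entirely independently of every other permutation. A Python sketch under simplifying assumptions (single-locus scan, correlation statistic, fabricated data), not the PruneDIRECT algorithm:

      import numpy as np

      def perm_threshold(genotypes, phenotype, n_perm=1000, alpha=0.05, seed=1):
          # Genome-wide threshold: the (1 - alpha) quantile of per-permutation
          # maxima of |correlation| between shuffled phenotype and each locus.
          rng = np.random.default_rng(seed)

          def scan(y):
              yc = y - y.mean()
              g = genotypes - genotypes.mean(axis=0)
              r = (g * yc[:, None]).sum(axis=0) / (
                  np.linalg.norm(g, axis=0) * np.linalg.norm(yc) + 1e-12)
              return np.abs(r).max()

          # Each iteration is independent: this is the step a cluster splits up.
          maxima = [scan(rng.permutation(phenotype)) for _ in range(n_perm)]
          return scan(phenotype), float(np.quantile(maxima, 1 - alpha))

      rng = np.random.default_rng(0)
      G = rng.integers(0, 2, size=(200, 50)).astype(float)   # 200 individuals, 50 loci
      y = 0.8 * G[:, 7] + rng.normal(0, 1, 200)              # locus 7 is causal
      obs, thr = perm_threshold(G, y)
      print(f"observed max |r| = {obs:.3f}, 5% threshold = {thr:.3f}")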

  11. A predictive framework for evaluating models of semantic organization in free recall

    PubMed Central

    Morton, Neal W; Polyn, Sean M.

    2016-01-01

    Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure: latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization is driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243
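    The likelihood-based comparison the abstract describes can be miniaturized: score each candidate semantic structure by the log-probability it assigns to the observed recall order, choosing each next item by a softmax over its similarity to the item just recalled. This toy stand-in omits CMR's temporal context dynamics entirely; the similarity matrices, sequence, and parameters below are fabricated for illustration.

      import numpy as np

      def sequence_loglik(recalls, sim, beta=1.0):
          # Log-likelihood of a recall order where each next item is drawn from
          # the not-yet-recalled items via softmax(beta * similarity to the last).
          loglik, remaining = 0.0, set(range(sim.shape[0]))
          remaining.discard(recalls[0])            # first recall taken as given
          for prev, nxt in zip(recalls, recalls[1:]):
              cand = sorted(remaining)
              scores = beta * sim[prev, cand]
              scores -= scores.max()               # numerically stable softmax
              probs = np.exp(scores) / np.exp(scores).sum()
              loglik += np.log(probs[cand.index(nxt)])
              remaining.discard(nxt)
          return loglik

      rng = np.random.default_rng(3)
      simA = (lambda m: (m + m.T) / 2)(rng.random((6, 6)))   # candidate structure A
      simB = np.eye(6)                                       # null structure B
      seq = [0, 2, 1, 5, 3, 4]                               # observed recall order
      print(sequence_loglik(seq, simA), sequence_loglik(seq, simB))
      # The higher log-likelihood identifies the better predictive model.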

  12. Alchemist multimodal workflows for diabetic retinopathy research, disease prevention and investigational drug discovery.

    PubMed

    Riposan, Adina; Taylor, Ian; Owens, David R; Rana, Omer; Conley, Edward C

    2007-01-01

    In this paper we present mechanisms for imaging and spectral data discovery, as applied to the early detection of pathologic mechanisms underlying diabetic retinopathy in research and clinical trial scenarios. We discuss the Alchemist framework, built using a generic peer-to-peer architecture, supporting distributed database queries and complex search algorithms based on workflow. The Alchemist is a domain-independent search mechanism that can be applied to search and data discovery scenarios in many areas. We illustrate Alchemist's ability to perform complex searches composed as a collection of peer-to-peer overlays, Grid-based services and workflows, applied to image and spectral data discovery for the early detection and prevention of retinal disease and for investigational drug discovery. The Alchemist framework is built on top of decentralised technologies and uses industry standards such as Web services and SOAP for messaging.

  13. SIRW: A web server for the Simple Indexing and Retrieval System that combines sequence motif searches with keyword searches.

    PubMed

    Ramu, Chenna

    2003-07-01

    SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.
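    The combination the abstract describes, keyword retrieval narrowed by a sequence motif search, can be shown in a few lines. A hedged Python sketch with fabricated records and a PROSITE-like motif written as a regular expression:

      import re

      def keyword_then_motif(records, keyword, motif):
          # Stage 1: keyword filter on the annotation text.
          # Stage 2: regex motif scan over the surviving sequences.
          pattern = re.compile(motif)
          hits = []
          for rec_id, (annotation, seq) in records.items():
              if keyword.lower() not in annotation.lower():
                  continue
              m = pattern.search(seq)
              if m:
                  hits.append((rec_id, m.start()))
          return hits

      records = {
          "P1": ("putative kinase, Homo sapiens", "MKVLAASPGKLATYE"),
          "P2": ("transport protein", "MSTNPKPQRKTKRNTNRR"),
          "P3": ("serine/threonine kinase", "MAATPHRGSKATL"),
      }
      # Motif [ST]-P-x-[RK] as a regex (hypothetical example pattern).
      print(keyword_then_motif(records, "kinase", r"[ST]P.[RK]"))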

  14. OpenSearch (ECHO-ESIP) & REST API for Earth Science Data Access

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will provide a brief technical overview of OpenSearch, the Earth Science Information Partners (ESIP) Federated Search framework, and the REST architecture; discuss NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) implementation lessons learned; and demonstrate the simplified usage of these technologies. SOAP, as a framework for web service communication, has numerous advantages for enterprise applications and Java/C# type programming languages. As a technical solution, SOAP has been a reliable framework on top of which many applications have been successfully developed and deployed. However, as interest grows for quick development cycles and more intriguing “mashups,” the SOAP API loses its appeal. Lightweight and simple are the vogue characteristics that are sought after. Enter the REST API architecture and OpenSearch format. Both of these items provide a new path for application development, addressing some of the issues unresolved by SOAP. ECHO has made available all of its discovery, order submission, and data management services through a publicly accessible SOAP API. This interface is utilized by a variety of ECHO client and data partners to provide valuable capabilities to end users. As ECHO interacted with current and potential partners looking to develop Earth Science tools utilizing ECHO, it became apparent that the development overhead required to interact with the SOAP API was a growing barrier to entry. ECHO acknowledged the technical issues that were being uncovered by its partner community and chose to provide two new interfaces for interacting with the ECHO metadata catalog. The first interface is built upon the OpenSearch format and the ESIP Federated Search framework. Leveraging these two items, a client (ECHO-ESIP) was developed with a focus on simplified searching and results presentation. The second interface is built upon the Representational State Transfer (REST) architecture. Leveraging the REST architecture, a new API has been made available that provides access to the entire SOAP API suite of services. The results of these development activities have not only positioned ECHO to engage in the thriving world of mashup applications, but have also provided an excellent real-world case study of how to successfully leverage these emerging technologies.
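    The appeal of the OpenSearch/REST route over SOAP is that a query is just an HTTP GET against a filled-in URL template. The Python sketch below uses a fabricated endpoint and parameter names purely for shape; the real ECHO URL template and parameters must be taken from its OpenSearch description document.

      import requests

      # Hypothetical endpoint, for illustration only.
      ENDPOINT = "https://example.gov/opensearch/granules.atom"

      def opensearch_granules(keyword, bbox, page=1, per_page=10):
          # Fill the template terms (free text, spatial box, paging) and GET.
          params = {
              "q": keyword,
              "boundingBox": ",".join(map(str, bbox)),
              "page": page,
              "numberOfResults": per_page,
          }
          resp = requests.get(ENDPOINT, params=params, timeout=30)
          resp.raise_for_status()
          return resp.text   # an Atom XML feed of matching granules

      # feed = opensearch_granules("sea surface temperature", (-180, -90, 180, 90))

    Contrast this with a SOAP client, which needs a WSDL, an envelope serializer, and generated stubs before the first query can be issued; that lower barrier to entry is exactly the point the presentation makes.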

  15. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
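    IMLP's generalized noise model and PD-tree search are beyond a short sketch, but the ICP skeleton it varies, alternating nearest-neighbor correspondence with a closed-form rigid registration, fits in a few lines. The following Python sketch implements the isotropic ICP baseline (Kabsch registration, KD-tree correspondences) on fabricated data, not IMLP itself:

      import numpy as np
      from scipy.spatial import cKDTree

      def kabsch(P, Q):
          # Least-squares rigid transform (R, t) mapping point set P onto Q.
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, Q.mean(axis=0) - R @ P.mean(axis=0)

      def icp(source, target, iters=30):
          # Alternate correspondence (nearest neighbor) and registration (Kabsch).
          tree = cKDTree(target)
          src, R_tot, t_tot = source.copy(), np.eye(3), np.zeros(3)
          for _ in range(iters):
              _, idx = tree.query(src)
              R, t = kabsch(src, target[idx])
              src = src @ R.T + t
              R_tot, t_tot = R @ R_tot, R @ t_tot + t
          return R_tot, t_tot

      rng = np.random.default_rng(5)
      target = rng.random((200, 3))
      c, s = np.cos(0.2), np.sin(0.2)            # modest misalignment
      true_R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
      source = target @ true_R.T + [0.05, -0.02, 0.03] + rng.normal(0, 0.002, (200, 3))
      R, t = icp(source, target)
      print(np.round(R @ true_R, 2))   # close to the identity on success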

  16. Health system frameworks and performance indicators in eight countries: A comparative international analysis

    PubMed Central

    Braithwaite, Jeffrey; Hibbert, Peter; Blakely, Brette; Plumb, Jennifer; Hannaford, Natalie; Long, Janet Cameron; Marks, Danielle

    2017-01-01

    Objectives: Performance indicators are a popular mechanism for measuring the quality of healthcare to facilitate both quality improvement and systems management. Few studies make comparative assessments of different countries’ performance indicator frameworks. This study identifies and compares frameworks and performance indicators used in selected Organisation for Economic Co-operation and Development health systems to measure and report on the performance of healthcare organisations and local health systems. Countries involved are Australia, Canada, Denmark, England, the Netherlands, New Zealand, Scotland and the United States. Methods: Identification of comparable international indicators and analyses of their characteristics and of their broader national frameworks and contexts were undertaken. Two dimensions of indicators – that they are nationally consistent (used across the country rather than just regionally) and locally relevant (measured and reported publicly at a local level, for example, a health service) – were deemed important. Results: The most commonly used domains in performance frameworks were safety, effectiveness and access. The search found 401 indicators that fulfilled the ‘nationally consistent and locally relevant’ criteria. Of these, 45 indicators are reported in more than one country. Cardiovascular, surgery and mental health were the most frequently reported disease groups. Conclusion: These comparative data inform researchers and policymakers internationally when designing health performance frameworks and indicator sets. PMID:28228948

  17. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    ERIC Educational Resources Information Center

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  18. Exploring outcomes and evaluation in narrative pedagogy: An integrative review.

    PubMed

    Brady, Destiny R; Asselin, Marilyn E

    2016-10-01

    To identify narrative pedagogy learning outcomes and evaluation methods used for pre-licensure nursing students. Recommend areas for expanding narrative pedagogy research. An integrative review using a modified version of Cooper's 1998 framework, as described by Whittemore and Knafl (2005). A computer-assisted search of the literature from 1995 to 2015 was performed using the search terms narrative pedagogy and nursing. Databases included the Cumulative Index to Nursing and Allied Health Literature, Academic Search Premier, Educational Resources Information Center, Educational Research Complete, Medline, PsychArticles, PsychINFO, and the Teacher Reference Center. Ancestry searches led to the inclusion of additional articles. Twenty-six texts met the criteria for full review and were evaluated for methodological rigor and relevance to the review aims. Nine articles achieved an acceptable quality score and were used for thematic analysis. Learning outcomes associated with narrative pedagogy were grouped into five themes: thinking, empowerment, interconnectedness, learning as a process of making meaning, and ethical/moral judgment. Multiple methods of evaluation are necessary to evaluate these learning outcomes. Narrative pedagogy may be a beneficial philosophical approach to teaching. However, at this time, there is insufficient evidence to recommend its universal adoption. It is too broad in its approach to reliably measure its effectiveness. Future research should examine the effectiveness of specific teaching strategies to promote desired learning outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology

    PubMed Central

    Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.

    2017-01-01

    Background We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). Conclusions These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences such as metabolic engineering, synthetic biology, and drug scheduling. PMID:28813442

  20. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review.

    PubMed

    Eysenbach, Gunther; Powell, John; Kuss, Oliver; Sa, Eun-Ryoung

    The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed. To establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, and to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research. We searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsycINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search. We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria. Two reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and quality and rigor of study methods and reporting. The most frequently used quality criteria include accuracy, completeness, readability, design, disclosures, and references provided. Fifty-five studies (70%) concluded that quality is a problem on the Web, 17 (22%) remained neutral, and 7 studies (9%) came to a positive conclusion. Positive studies scored significantly lower in search (P =.02) and evaluation (P =.04) methods. Due to differences in study methods and rigor, quality criteria, study population, and topic chosen, study results and conclusions on health-related Web sites vary widely. Operational definitions of quality criteria are needed.

  1. An Evaluation of the Generalized Intelligent Framework for Tutoring (GIFT) from a Learner’s Perspective

    DTIC Science & Technology

    2014-12-01

    on the topic that included questions more in-depth than the pretest. The results from this experiment suggested that students learned significantly...information on the slide is important and what is not important. To alleviate this, the design of the slides makes use of PowerPoint bullets or numbers to...2003;41(1):77-86. Bloom BS. The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring. Educational

  2. How best to structure interdisciplinary primary care teams: the study protocol for a systematic review with narrative framework synthesis.

    PubMed

    Wranik, W Dominika; Hayden, Jill A; Price, Sheri; Parker, Robin M N; Haydt, Susan M; Edwards, Jeanette M; Suter, Esther; Katz, Alan; Gambold, Liesl L; Levy, Adrian R

    2016-10-04

    Western publicly funded health care systems increasingly rely on interdisciplinary teams to support primary care delivery and management of chronic conditions. This knowledge synthesis focuses on what is known in the academic and grey literature about optimal structural characteristics of teams. Its goal is to assess which factors contribute to the effective functioning of interdisciplinary primary care teams and improved health system outcomes, with specific focus on (i) team structure contribution to team process, (ii) team process contribution to primary care goals, and (iii) team structure contribution to primary care goals. The systematic search of academic literature focuses on four chronic conditions and co-morbidities. Within this scope, qualitative and quantitative studies that assess the effects of team characteristics (funding, governance, organization) on care process and patient outcomes will be searched. Electronic databases (Ovid MEDLINE, Embase, CINAHL, PAIS, Web of Science) will be searched systematically. Online web-based searches will be supported by the Grey Matters Tool. Studies will be included, if they report on interdisciplinary primary care in publicly funded Western health systems, and address the relationships between team structure, process, and/or patient outcomes. Studies will be selected in a three-stage screening process (title/abstract/full text) by two independent reviewers in each stage. Study quality will be assessed using the Mixed Methods Assessment Tool. An a priori framework will be applied to data extraction, and a narrative framework approach is used for the synthesis. Using an integrated knowledge translation approach, an electronic decision support tool will be developed for decision makers. It will be searchable along two axes of inquiry: (i) what primary care goals are supported by specific team characteristics and (ii) how should teams be structured to support specific primary care goals? The results of this evidence review will contribute directly to the design of interdisciplinary primary care teams. The optimized design will support the goals of primary care, contributing to the improved health of populations. PROSPERO CRD42016041884.

  3. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare

    PubMed Central

    Taylor, Michael J; McNicholas, Chris; Nicolay, Chris; Darzi, Ara; Bell, Derek; Reed, Julie E

    2014-01-01

    Background Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve quality of systems. The method is widely accepted in healthcare improvement; however there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in peer-reviewed literature against this framework. Methods NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data over time. Results 73 of 409 individual articles identified met the inclusion criteria. Of the 73 articles, 47 documented PDSA cycles in sufficient detail for full analysis against the whole framework. Many of these studies reported application of the PDSA method that failed to accord with primary features of the method. Less than 20% (14/73) fully documented the application of a sequence of iterative cycles. Furthermore, a lack of adherence to the notion of small-scale change is apparent and only 15% (7/47) reported the use of quantitative data at monthly or more frequent data intervals to inform progression of cycles. Discussion To progress the development of the science of improvement, a greater understanding of the use of improvement methods, including PDSA, is essential to draw reliable conclusions about their effectiveness. This would be supported by the development of systematic and rigorous standards for the application and reporting of PDSAs. PMID:24025320

  4. A scoping review of scoping reviews: advancing the approach and enhancing the consistency.

    PubMed

    Pham, Mai T; Rajić, Andrijana; Greig, Judy D; Sargeant, Jan M; Papadopoulos, Andrew; McEwen, Scott A

    2014-12-01

    The scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature. A scoping review was conducted using the Arksey and O'Malley framework. A search was conducted in four bibliographic databases and the gray literature to identify scoping review studies. Review selection and characterization were performed by two independent reviewers using pretested forms. The search identified 344 scoping reviews published from 1999 to October 2012. The reviews varied in terms of purpose, methodology, and detail of reporting. Nearly three-quarters of the reviews (74.1%) addressed a health topic. Study completion times varied from 2 weeks to 20 months, and 51% utilized a published methodological framework. Quality assessment of included studies was infrequently performed (22.38%). Scoping reviews are a relatively new but increasingly common approach for mapping broad topics. Because of variability in their conduct, there is a need for their methodological standardization to ensure the utility and strength of evidence. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.

  5. Toward a Mixed-Methods Research Approach to Content Analysis in The Digital Age: The Combined Content-Analysis Model and its Applications to Health Care Twitter Feeds.

    PubMed

    Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne; Johnson, Andrew M

    2016-03-08

    Twitter's 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers. We excluded 3 papers and further analyzed 18. Results suggest that the methods used in these studies were not purely quantitative or qualitative, and the mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. We propose the CCA model as a useful framework that provides a straightforward approach to guide Twitter-driven studies and that adds rigor to health care social media investigations. We provide suggestions for the use of the CCA model in elder care-related contexts.

  6. Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.

    PubMed

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu

    2017-07-01

    In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
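    Locality-sensitive hashing is the speed-critical piece of such a pipeline, and a random-projection variant is compact enough to sketch: vectors are bucketed by the sign pattern of a few random projections, so near-duplicate feature vectors usually collide. The feature vectors below are random stand-ins for the paper's nuclei and Gabor descriptors:

      import numpy as np
      from collections import defaultdict

      class RandomProjectionLSH:
          # Bucket vectors by the sign pattern of random hyperplane projections.
          def __init__(self, dim, n_bits=16, seed=0):
              self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
              self.table = defaultdict(list)

          def _key(self, v):
              return tuple((self.planes @ v > 0).astype(int))

          def add(self, item_id, v):
              self.table[self._key(v)].append(item_id)

          def query(self, v):
              # Candidates from the matching bucket only; a real retrieval
              # pipeline would re-rank them by exact distance afterwards.
              return self.table.get(self._key(v), [])

      rng = np.random.default_rng(1)
      feats = rng.normal(size=(1000, 64))
      lsh = RandomProjectionLSH(dim=64)
      for i, f in enumerate(feats):
          lsh.add(i, f)
      probe = feats[42] + rng.normal(0, 0.01, 64)   # slightly perturbed query
      print(lsh.query(probe))                       # typically includes item 42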

  7. A Systematic Literature Mapping of Risk Analysis of Big Data in Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Bee Yusof Ali, Hazirah; Marziana Abdullah, Lili; Kartiwi, Mira; Nordin, Azlin; Salleh, Norsaremah; Sham Awang Abu Bakar, Normi

    2018-05-01

    This paper investigates previous literature that focuses on three elements: risk assessment, big data, and the cloud. We use a systematic literature mapping method to search for journals and proceedings. The systematic literature mapping process yields a properly screened and focused body of literature, and inclusion and exclusion criteria further narrow the search. Classification helps us group the literature into categories. At the end of the mapping, gaps become visible; these gaps indicate where our focus should be in analysing the risk of big data in a cloud computing environment. Thus, a framework for assessing the security, privacy, and trust risks associated with big data in a cloud computing environment is highly needed.

  8. Use of unsolicited first-person written illness narratives in research: systematic review.

    PubMed

    O'Brien, Mary R; Clark, David

    2010-08-01

    This paper is a report of a methodological systematic review conducted to critically analyze the use of unsolicited first-person written illness narratives for research purposes. Documenting illness experiences through written narratives enables individuals to record the impact of illness on themselves and those closest to them. In health research, unsolicited first-person written illness narratives are recognized increasingly as legitimate data sources. To date there has been no critical evaluation of the method. The ISI Web of Knowledge; CINAHL; PubMed; MEDLINE; PsycINFO; Science Direct; Cochrane Library databases and the internet metasearch engine 'Dogpile' were searched for the period up to 2009. The search terms were: 'patient experience', 'narratives', 'autobiography', 'pathography', 'written narratives', 'illness narratives', 'internet', 'published', 'unsolicited'. Recommendations within the Centre for Reviews and Dissemination guidance informed the review. Eligible studies were evaluated according to inclusion/exclusion criteria. The data were extracted by one reviewer and monitored by the second reviewer. Eighteen papers met the inclusion criteria, 12 from the original search in 2008 and six from the updated search in October 2009. Nine used unpublished (internet) narratives, eight used published (print) accounts and one drew on both genres. The method has been used to explore a wide range of illness experiences. There was lack of agreement on key terminology. Methodological issues were often poorly-described, and confused ethical stances were reported. The lack of methodological detail in published papers requires attention if this method is to be used effectively in healthcare research. The confused ethical stance adopted needs to be addressed and clarified. A theoretical conceptual framework to define and describe the method accurately is urgently required.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorai, Prashun; Toberer, Eric S.; Stevanović, Vladan

    Here, at room temperature and above, most magnetic materials adopt a spin-disordered (paramagnetic) state whose electronic properties can differ significantly from their low-temperature, spin-ordered counterparts. Yet computational searches for new functional materials usually assume some type of magnetic order. In the present work, we demonstrate a methodology to incorporate spin disorder in computational searches and predict the electronic properties of the paramagnetic phase. We implement this method in a high-throughput framework to assess the potential for thermoelectric performance of 1350 transition-metal sulfides and find that all magnetic systems we identify as promising in the spin-ordered ground state cease to be promising in the paramagnetic phase due to disorder-induced deterioration of the charge carrier transport properties. We also identify promising non-magnetic candidates that do not suffer from these spin disorder effects. In addition to identifying promising materials, our results offer insights into the apparent scarcity of magnetic systems among known thermoelectrics and highlight the importance of including spin disorder in computational searches.

  10. Private algorithms for the protected in social network search

    PubMed Central

    Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory

    2016-01-01

    Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606

  11. Private algorithms for the protected in social network search.

    PubMed

    Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory

    2016-01-26

    Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
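
    To make the "noise in the prioritization" idea concrete, here is a minimal sketch, not the authors' exact algorithm: a targeted search that ranks frontier nodes by a statistic of their already-confirmed neighbors, perturbed with Laplace noise in the style of differential privacy. The names `is_target`, `epsilon` and `budget` are illustrative assumptions; `is_target` stands in for a costly statusing mechanism such as surveillance or testing.

      import heapq
      import random

      def laplace(scale):
          # Laplace(0, scale) sampled as the difference of two exponentials.
          return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

      def noisy_targeted_search(graph, seeds, is_target, epsilon=1.0, budget=100):
          # graph: node -> iterable of neighbors; is_target(v) is the costly
          # statusing mechanism (surveillance, background check, testing).
          confirmed, visited = set(), set(seeds)
          frontier = [(-laplace(1.0 / epsilon), s) for s in seeds]  # max-heap
          heapq.heapify(frontier)
          while frontier and budget > 0:
              _, v = heapq.heappop(frontier)
              budget -= 1
              if is_target(v):
                  confirmed.add(v)
              for u in graph.get(v, ()):
                  if u not in visited:
                      visited.add(u)
                      # Prioritize by count of confirmed-target neighbors, plus
                      # noise calibrated by the privacy parameter epsilon.
                      score = sum(w in confirmed for w in graph.get(u, ()))
                      heapq.heappush(frontier, (-(score + laplace(1.0 / epsilon)), u))
          return confirmed

    The larger epsilon is, the less noise is injected and the closer the behavior comes to an ordinary greedy targeted search; smaller epsilon trades search efficiency for stronger protection of the non-targeted population.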

  12. An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the SFLA framework, an improved frog-leap operator is designed that combines the influence of the global best individual on frog leaping, information exchange between individual frogs, and genetic mutation applied with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that exploits the specific advantages of Lévy flights and the frog-leap operator. Furthermore, a greedy transform method is used to repair infeasible solutions and optimize feasible ones. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances; the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
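
    The greedy transform mentioned above can be sketched in a few lines. The version below is a commonly used variant, sorting items by value-to-weight ratio, dropping the least efficient selected items until the solution is feasible, then greedily adding efficient items that still fit; the paper's exact operator may differ in detail.

      def greedy_repair(bits, values, weights, capacity):
          # Order items by decreasing value-to-weight ratio.
          order = sorted(range(len(bits)),
                         key=lambda i: values[i] / weights[i], reverse=True)
          load = sum(w for b, w in zip(bits, weights) if b)
          for i in reversed(order):       # repair: drop least efficient items
              if load <= capacity:
                  break
              if bits[i]:
                  bits[i], load = 0, load - weights[i]
          for i in order:                 # optimize: add efficient items that fit
              if not bits[i] and load + weights[i] <= capacity:
                  bits[i], load = 1, load + weights[i]
          return bits

    For example, greedy_repair([1, 1, 1], [6, 10, 12], [1, 2, 3], 5) drops the lowest-ratio item to restore feasibility and then finds nothing further that fits, returning [1, 1, 0].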

  13. Validity of instruments to measure physical activity may be questionable due to a lack of conceptual frameworks: a systematic review

    PubMed Central

    2011-01-01

    Background Guidance documents for the development and validation of patient-reported outcomes (PROs) advise the use of conceptual frameworks, which outline the structure of the concept that a PRO aims to measure. It is unknown whether currently available PROs are based on conceptual frameworks. This study, which was limited to a specific case, had the following aims: (i) to identify conceptual frameworks of physical activity in chronic respiratory patients or similar populations (chronic heart disease patients or the elderly) and (ii) to assess whether the development and validation of PROs to measure physical activity in these populations were based on a conceptual framework of physical activity. Methods Two systematic reviews were conducted through searches of the Medline, Embase, PsycINFO, and Cinahl databases prior to January 2010. Results In the first review, only 2 out of 581 references pertaining to physical activity in the defined populations provided a conceptual framework of physical activity in COPD patients. In the second review, out of 103 studies developing PROs to measure physical activity or related constructs, none were based on a conceptual framework of physical activity. Conclusions These findings raise concerns about how the large body of evidence from studies that use physical activity PRO instruments should be evaluated by health care providers, guideline developers, and regulatory agencies. PMID:21967887

  14. Evaluation of the causal framework used for setting national ambient air quality standards.

    PubMed

    Goodman, Julie E; Prueitt, Robyn L; Sax, Sonja N; Bailey, Lisa A; Rhomberg, Lorenz R

    2013-11-01

    A scientifically sound assessment of the potential hazards associated with a substance requires a systematic, objective and transparent evaluation of the weight of evidence (WoE) for causality of health effects. We critically evaluated the current WoE framework for causal determination used in the United States Environmental Protection Agency's (EPA's) assessments of the scientific data on air pollutants for the National Ambient Air Quality Standards (NAAQS) review process, including its methods for literature searches; study selection, evaluation and integration; and causal judgments. The causal framework used in recent NAAQS evaluations has many valuable features, but it could be more explicit in some cases, and some features are missing that should be included in every WoE evaluation. Because of this, it has not always been applied consistently in evaluations of causality, leading to conclusions that are not always supported by the overall WoE, as we demonstrate using EPA's ozone Integrated Science Assessment as a case study. We propose additions to the NAAQS causal framework based on best practices gleaned from a previously conducted survey of available WoE frameworks. A revision of the NAAQS causal framework so that it more closely aligns with these best practices and the full and consistent application of the framework will improve future assessments of the potential health effects of criteria air pollutants by making the assessments more thorough, transparent, and scientifically sound.

  15. How equity is addressed in clinical practice guidelines: a content analysis

    PubMed Central

    Shi, Chunhu; Tian, Jinhui; Wang, Quan; Petkovic, Jennifer; Ren, Dan; Yang, Kehu; Yang, Yang

    2014-01-01

    Objectives Incorporating equity into guidelines presents methodological challenges. This study aims to qualitatively synthesise the methods for incorporating equity in clinical practice guidelines (CPGs). Setting Content analysis of methodological publications. Eligibility criteria for selecting studies Methodological publications were included if they provided checklists/frameworks on when, how and to what extent equity should be incorporated in CPGs. Data sources We electronically searched MEDLINE, retrieved references, and browsed guideline development organisation websites from inception to January 2013. After study selection by two authors, general characteristics and checklist items/framework components from included studies were extracted. Based on the questions or items from checklists/frameworks (unit of analysis), content analysis was conducted to identify themes and questions/items were grouped into these themes. Primary outcomes The primary outcomes were methodological themes and processes on how to address equity issues in guideline development. Results 8 studies with 10 publications were included from 3405 citations. In total, a list of 87 questions/items was generated from 17 checklists/frameworks. After content analysis, questions were grouped into eight themes (‘scoping questions’, ‘searching relevant evidence’, ‘appraising evidence and recommendations’, ‘formulating recommendations’, ‘monitoring implementation’, ‘providing a flow chart to include equity in CPGs’, and ‘others: reporting of guidelines and comments from stakeholders’ for CPG developers and ‘assessing the quality of CPGs’ for CPG users). Four included studies covered more than five of these themes. We also summarised the process of guideline development based on the themes mentioned above. Conclusions For disadvantaged population-specific CPGs, eight important methodological issues identified in this review should be considered when including equity in CPGs under the guidance of a scientific guideline development manual. PMID:25479795

  16. GSNFS: Gene subnetwork biomarker identification of lung cancer expression data.

    PubMed

    Doungpan, Narumol; Engchuan, Worrawat; Chan, Jonathan H; Meechai, Asawin

    2016-12-05

    Gene expression has been used to identify disease gene biomarkers, but challenges remain. Single-gene or gene-set biomarkers are inadequate to provide sufficient understanding of complex disease mechanisms and of the relationships among the genes involved. Network-based methods have thus been considered for inferring the interactions within a group of genes to further study the disease mechanism. Recently, the Gene-Network-based Feature Set (GNFS) method, which can handle case-control and multiclass expression data for gene biomarker identification, has been proposed, partly taking network topology into account. However, its performance relies on a greedy search for building subnetworks and thus requires further improvement. In this work, we establish a new approach named Gene Sub-Network-based Feature Selection (GSNFS) by implementing the GNFS framework with two proposed searching and scoring algorithms, namely gene-set-based (GS) search and parent-node-based (PN) search, to identify subnetworks. An additional dataset is used to validate the results. The two searching algorithms govern subnetwork expansion through the degree of connectivity and the scoring scheme used to build subnetworks and their topology. In each expansion iteration, a neighbour gene of the current subnetwork is recruited if its expression data improve the overall subnetwork score. While the GS search calculates the subnetwork score from an activity score of the current subnetwork and the expression values of its neighbour genes, the PN search uses the expression value of the corresponding parent of each neighbour gene. Four lung cancer expression datasets were used for subnetwork identification. The use of pathway data and protein-protein interactions as network data, in order to consider the interactions among significant genes, is also discussed. Classification was performed to compare the identified gene subnetworks against three other subnetwork identification algorithms. The two searching algorithms resulted in better classification and gene/gene-set agreement than the original greedy search of the GNFS method, and the lung cancer subnetworks they identified improved cross-dataset validation and increased the consistency of findings between two independent datasets. A homogeneity measurement was conducted to assess dataset compatibility in cross-dataset validation: the lung cancer dataset with higher homogeneity gave better results with the GS search, while the dataset with low homogeneity gave better results with the PN search. Ten-fold cross-dataset validation on the independent lung cancer datasets showed higher classification performance for the proposed algorithms than for the greedy search in the original GNFS method. The proposed searching algorithms recruit more genes in the subnetwork expansion step than the greedy algorithm, and the performance of the subnetworks identified by GSNFS improved in terms of classification and gene/gene-set-level agreement depending on the homogeneity of the datasets analysed. Some genes obtained in common from the four datasets with the different searching algorithms are known to play a role in lung cancer. The improvement in classification performance and gene/gene-set-level agreement, together with the biological relevance of the findings, indicates the effectiveness of the GSNFS method for gene subnetwork identification from expression data.
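
    As a rough illustration of the recruit-if-score-improves loop described above, the sketch below grows a subnetwork greedily from a seed gene. The discriminative score used here, the absolute t-statistic of the averaged member expression against class labels, is a stand-in for the paper's GS/PN scoring schemes; `expr`, `network` and `labels` are assumed data structures.

      import numpy as np
      from scipy import stats

      def expand_subnetwork(seed, network, expr, labels, max_size=20):
          # expr: dict gene -> 1D expression array over samples;
          # labels: 0/1 numpy array over the same samples;
          # network: dict gene -> iterable of neighbouring genes.
          members = [seed]

          def score(genes):
              activity = np.mean([expr[g] for g in genes], axis=0)
              t, _ = stats.ttest_ind(activity[labels == 1], activity[labels == 0])
              return abs(t)

          best = score(members)
          while len(members) < max_size:
              candidates = {n for g in members for n in network.get(g, ())
                            if n not in members and n in expr}
              scored = [(score(members + [n]), n) for n in candidates]
              if not scored:
                  break
              new_best, pick = max(scored)
              if new_best <= best:
                  break                  # no neighbour improves the score
              members.append(pick)
              best = new_best
          return members, best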

  17. Object recognition in images via a factor graph model

    NASA Astrophysics Data System (ADS)

    He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu

    2018-04-01

    Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been used to address these problems, especially the two-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations among Bag-of-Words features by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we develop a parameter learning algorithm for the factor graph model based on gradient descent and the Loopy Sum-Product algorithm. Experimental results on the Graz-02 dataset show that the recognition performance of our method, in precision and recall, is better than both a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.

  18. Crisis in science: in search for new theoretical foundations.

    PubMed

    Schroeder, Marcin J

    2013-09-01

    Recognition of the need for theoretical biology more than half century ago did not bring substantial progress in this direction. Recently, the need for new methods in science, including physics became clear. The breakthrough should be sought in answering the question "What is life?", which can help to explain the mechanisms of consciousness and consequently give insight into the way we comprehend reality. This could help in the search for new methods in the study of both physical and biological phenomena. However, to achieve this, new theoretical discipline will have to be developed with a very general conceptual framework and rigor of mathematical reasoning, allowing it to assume the leading role in science. Since its foundations are in the recognition of the role of life and consciousness in the epistemic process, it could be called biomathics. The prime candidates proposed here for being the fundamental concepts for biomathics are 'information' and 'information integration', with an appropriately general mathematical formalism. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin

    2018-06-01

    In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector, where the recurrent YoTube explores the regression capability of RNNs for candidate bounding box prediction using learnt temporal dynamics, and the static YoTube produces bounding boxes using rich appearance cues in a single frame. Both networks are trained using RGB and optical flow in order to fully exploit the rich appearance, motion and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle untrimmed videos effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that our proposed technique obtains superior performance compared with the state of the art.

  20. Can we Build on Social Movement Theories to Develop and Improve Community-Based Participatory Research? A Framework Synthesis Review.

    PubMed

    Tremblay, Marie-Claude; Martin, Debbie H; Macaulay, Ann C; Pluye, Pierre

    2017-06-01

    A long-standing challenge in community-based participatory research (CBPR) has been to anchor practice and evaluation in a relevant and comprehensive theoretical framework of community change. This study describes the development of a multidimensional conceptual framework that builds on social movement theories to identify key components of CBPR processes. Framework synthesis was used as a general literature search and analysis strategy. An initial conceptual framework was developed from the theoretical literature on social movement. A literature search performed to identify illustrative CBPR projects yielded 635 potentially relevant documents, from which eight projects (corresponding to 58 publications) were retained after record and full-text screening. Framework synthesis was used to code and organize data from these projects, ultimately providing a refined framework. The final conceptual framework maps key concepts of CBPR mobilization processes, such as the pivotal role of the partnership; resources and opportunities as necessary components feeding the partnership's development; the importance of framing processes; and a tight alignment between the cause (partnership's goal), the collective action strategy, and the system changes targeted. The revised framework provides a context-specific model to generate a new, innovative understanding of CBPR mobilization processes, drawing on existing theoretical foundations. © 2017 The Authors American Journal of Community Psychology published by Wiley Periodicals, Inc. on behalf of Society for Community Research and Action.

  1. A Bayesian framework for infrasound location

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.

    2010-04-01

    We develop a framework for location of infrasound events using backazimuth and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL) developed here estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by formulating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension to methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
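
    A grid-search caricature of the Bayesian location idea can be written down directly: Gaussian likelihoods for traveltime residuals, whose variance is the sum of the measurement-error and model-error variances, are combined with Gaussian backazimuth residuals over a grid of candidate locations and origin times. All parameter names and values below are illustrative assumptions, not those of the published BISL implementation.

      import numpy as np

      def bisl_grid_posterior(sta_xy, t_obs, baz_obs, grid_xy, t0_grid,
                              celerity=0.30, sig_meas=5.0, sig_model=15.0,
                              sig_baz=5.0):
          # Coordinates in km, times in s, celerity in km/s. Measurement and
          # model error combine as additive Gaussian variances on traveltime.
          var_t = sig_meas ** 2 + sig_model ** 2
          logp = np.zeros((len(grid_xy), len(t0_grid)))
          for i, (x, y) in enumerate(grid_xy):
              dx, dy = x - sta_xy[:, 0], y - sta_xy[:, 1]
              dist = np.hypot(dx, dy)
              baz_pred = np.degrees(np.arctan2(dx, dy)) % 360.0    # from north
              dbaz = (baz_obs - baz_pred + 180.0) % 360.0 - 180.0  # wrap +-180
              ll_baz = -0.5 * np.sum((dbaz / sig_baz) ** 2)
              for j, t0 in enumerate(t0_grid):
                  res = t_obs - (t0 + dist / celerity)
                  logp[i, j] = ll_baz - 0.5 * np.sum(res ** 2) / var_t
          logp -= logp.max()
          post = np.exp(logp)
          return post / post.sum()   # posterior over (location, origin time)

    Credibility regions then follow by accumulating posterior mass over grid cells until the desired probability level is reached.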

  2. The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. II. Framework, Strategy, and First Result

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, J. T.; Griffith, R. L.; Sigurdsson, S.

    We describe the framework and strategy of the Ĝ infrared search for extraterrestrial civilizations with large energy supplies, which will use the wide-field infrared surveys of WISE and Spitzer to search for these civilizations' waste heat. We develop a formalism for translating mid-infrared photometry into quantitative upper limits on extraterrestrial energy supplies. We discuss the likely sources of false positives, how dust can and will contaminate our search, and prospects for distinguishing dust from alien waste heat. We argue that galaxy-spanning civilizations may be easier to distinguish from natural sources than circumstellar civilizations (i.e., Dyson spheres), although GAIA will significantly improve our capability to identify the latter. We present a zeroth order null result of our search based on the WISE all-sky catalog: we show, for the first time, that Kardashev Type III civilizations (as Kardashev originally defined them) are very rare in the local universe. More sophisticated searches can extend our methodology to smaller waste heat luminosities, and potentially entirely rule out (or detect) both Kardashev Type III civilizations and new physics that allows for unlimited 'free' energy generation.

  3. The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. II. Framework, Strategy, and First Result

    NASA Astrophysics Data System (ADS)

    Wright, J. T.; Griffith, R. L.; Sigurdsson, S.; Povich, M. S.; Mullan, B.

    2014-09-01

    We describe the framework and strategy of the Ĝ infrared search for extraterrestrial civilizations with large energy supplies, which will use the wide-field infrared surveys of WISE and Spitzer to search for these civilizations' waste heat. We develop a formalism for translating mid-infrared photometry into quantitative upper limits on extraterrestrial energy supplies. We discuss the likely sources of false positives, how dust can and will contaminate our search, and prospects for distinguishing dust from alien waste heat. We argue that galaxy-spanning civilizations may be easier to distinguish from natural sources than circumstellar civilizations (i.e., Dyson spheres), although GAIA will significantly improve our capability to identify the latter. We present a zeroth order null result of our search based on the WISE all-sky catalog: we show, for the first time, that Kardashev Type III civilizations (as Kardashev originally defined them) are very rare in the local universe. More sophisticated searches can extend our methodology to smaller waste heat luminosities, and potentially entirely rule out (or detect) both Kardashev Type III civilizations and new physics that allows for unlimited "free" energy generation.

  4. Data Content and Exchange in General Practice: a Review

    PubMed Central

    Kalankesh, Leila R; Farahbakhsh, Mostafa; Rahimi, Niloofar

    2014-01-01

    Background: Efficient communication of data is an inevitable requirement for general practice. Any issue in data content and its exchange among GPs and other related entities hinders continuity of patient care. Methods: The literature search for this review was conducted on three electronic databases: Medline, Scopus and Science Direct. Results: Through reviewing the papers, we extracted information on GP data content and on the use cases, participants, tools and methods, incentives and barriers of GP information exchange. Conclusion: Considering the importance of data content and exchange for GP systems, more research is needed toward providing a comprehensive framework for data content and exchange in GP systems. PMID:25648317

  5. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that relies on the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
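
    The encode-and-search stage that any prototype-based scheme of this kind shares can be sketched briefly. The learning of the prototypes and their codes, which is ABQ's actual contribution via alternating optimization, is omitted here; both are assumed given.

      import numpy as np

      def assign_codes(X, prototypes, codes):
          # Quantize each row of X to the code of its nearest prototype.
          # prototypes: (k, d) array; codes: (k, b) array of 0/1 bits.
          d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
          return codes[d2.argmin(axis=1)]

      def hamming_search(query_code, db_codes, top=10):
          # Indices of the `top` database items nearest in Hamming distance.
          dists = (db_codes != query_code).sum(axis=1)
          return np.argsort(dists)[:top]

    Queries are quantized with the same assign_codes and compared to database codes purely in Hamming space, which is what makes the lookup fast.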

  6. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ

    PubMed Central

    2012-01-01

    Background The syntheses of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. This study aims to develop a framework for reporting the synthesis of qualitative health research. Methods We conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesizing qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of year of publication, methods and methodologies, and health topics. We removed items that were duplicated, impractical to assess, and rephrased items for clarity. Results The Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings. Conclusions The ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement. PMID:23185978

  7. Priority setting for health technology assessments: a systematic review of current practical approaches.

    PubMed

    Noorani, Hussein Z; Husereau, Donald R; Boudreau, Rhonda; Skidmore, Becky

    2007-01-01

    This study sought to identify and compare various practical and current approaches of health technology assessment (HTA) priority setting. A literature search was performed across PubMed, MEDLINE, EMBASE, BIOSIS, and Cochrane. Given an earlier review conducted by European agencies (EUR-ASSESS project), the search was limited to literature indexed from 1996 onward. We also searched Web sites of HTA agencies as well as HTAi and ISTAHC conference abstracts. Agency representatives were contacted for information about their priority-setting processes. Reports on practical approaches selected through these sources were identified independently by two reviewers. A total of twelve current priority-setting frameworks from eleven agencies were identified. Ten countries were represented: Canada, Denmark, England, Hungary, Israel, Scotland, Spain, Sweden, The Netherlands, and United States. Fifty-nine unique HTA priority-setting criteria were divided into eleven categories (alternatives; budget impact; clinical impact; controversial nature of proposed technology; disease burden; economic impact; ethical, legal, or psychosocial implications; evidence; interest; timeliness of review; variation in rates of use). Differences across HTA agencies were found regarding procedures for categorizing, scoring, and weighing of policy criteria. Variability exists in the methods for priority setting of health technology assessment across HTA agencies. Quantitative rating methods and consideration of cost benefit for priority setting were seldom used. These study results will assist HTA agencies that are re-visiting or developing their prioritization methods.

  8. Hash Bit Selection for Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Junfeng He; Shih-Fu Chang

    2017-11-01

    To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and bit complementarity as the selection criteria, which can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.
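
    A greedy approximation conveys the flavor of the selection problem: pick bits with high reliability while penalizing redundancy with bits already chosen. This is a simplification for illustration only; the paper formulates the trade-off as dynamic programming and as a quadratic program equivalent to a normalized dominant set problem.

      import numpy as np

      def select_bits(bit_matrix, reliability, k, lam=0.5):
          # bit_matrix: (n_samples, n_bits) in {0, 1}; reliability: quality
          # score per candidate bit (higher is better); lam trades the two.
          reliability = np.asarray(reliability, dtype=float)
          B = 2.0 * bit_matrix - 1.0            # map {0,1} -> {-1,+1}
          corr = np.abs(B.T @ B) / B.shape[0]   # pairwise bit similarity
          chosen = [int(np.argmax(reliability))]
          while len(chosen) < k:
              redundancy = corr[:, chosen].mean(axis=1)
              score = reliability - lam * redundancy
              score[chosen] = -np.inf           # never re-pick a bit
              chosen.append(int(np.argmax(score)))
          return chosen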

  9. A depth-first search algorithm to compute elementary flux modes by linear programming

    PubMed Central

    2014-01-01

    Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is nearly impossible. Even for moderately sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
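
    The branch-and-prune skeleton can be illustrated with a toy version: branch on each reaction being free or forced to zero, and prune any branch where a linear program finds no nonzero steady-state flux. This sketch assumes an irreversible network with stoichiometric matrix S, uses SciPy's linprog, and omits the elementarity test that the actual algorithm applies before reporting a mode.

      import numpy as np
      from scipy.optimize import linprog

      def feasible(S, zero_set):
          # LP test: does some flux v >= 0 with S v = 0 and sum(v) = 1 exist
          # while reactions in zero_set carry no flux?
          m, n = S.shape
          A_eq = np.vstack([S, np.ones(n)])          # sum(v) = 1 rules out v = 0
          b_eq = np.append(np.zeros(m), 1.0)
          bounds = [(0.0, 0.0) if j in zero_set else (0.0, None) for j in range(n)]
          res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=bounds,
                        method="highs")
          return res.status == 0

      def dfs(S, zero_set=frozenset(), j=0, out=None):
          # Branch on each reaction being free or forced to zero; prune any
          # subtree whose LP is infeasible. Exponential: toy networks only.
          n = S.shape[1]
          if out is None:
              out = []
          if not feasible(S, zero_set):
              return out
          if j == n:
              out.append(sorted(set(range(n)) - zero_set))
              return out
          dfs(S, zero_set, j + 1, out)          # reaction j left free
          dfs(S, zero_set | {j}, j + 1, out)    # reaction j forced to zero
          return out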

  10. Metrics for comparing neuronal tree shapes based on persistent homology.

    PubMed

    Li, Yanjie; Wang, Dingkang; Ascoli, Giorgio A; Mitra, Partha; Wang, Yusu

    2017-01-01

    As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools to facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support comparison and classification of large collections of neuron structures efficiently. Specifically we propose to use a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structures. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence-signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function will give a persistence-based signature containing strictly more information than the classical Sholl analysis. At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which would be important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework.
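
    As a concrete instance of mapping a descriptor function on a tree to a persistence signature, the sketch below computes zero-dimensional sublevel-set persistence with the classic union-find "elder rule". It is a minimal illustration; the paper's signatures are built from richer descriptor functions on neuron morphologies.

      def tree_persistence(f, edges):
          # 0-dimensional sublevel-set persistence of f on a tree via the
          # union-find "elder rule". f: vertex -> value; edges: (u, v) pairs.
          parent, birth = {}, {}

          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]   # path halving
                  x = parent[x]
              return x

          adj = {v: [] for v in f}
          for u, v in edges:
              adj[u].append(v)
              adj[v].append(u)
          pairs = []
          for v in sorted(f, key=f.get):          # sweep vertices by value
              parent[v], birth[v] = v, f[v]
              for u in adj[v]:
                  if u in parent:                 # neighbour already alive
                      ru, rv = find(u), find(v)
                      if ru == rv:
                          continue
                      # Elder rule: the younger component dies at f[v].
                      young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                      pairs.append((birth[young], f[v]))
                      parent[young] = old
          roots = {find(v) for v in f}
          pairs.extend((birth[r], None) for r in roots)   # essential classes
          return pairs

    With, say, f equal to the distance from the soma, the resulting (birth, death) pairs summarize how branches appear and merge, and the multiset of pairs can be binned or kernelized into a fixed-length vector for classification.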

  11. Metrics for comparing neuronal tree shapes based on persistent homology

    PubMed Central

    Li, Yanjie; Wang, Dingkang; Ascoli, Giorgio A.; Mitra, Partha

    2017-01-01

    As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools to facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support comparison and classification of large collections of neuron structures efficiently. Specifically we propose to use a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structures. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence-signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function will give a persistence-based signature containing strictly more information than the classical Sholl analysis. At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which would be important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework. PMID:28809960

  12. Towards a common oil spill risk assessment framework – Adapting ISO 31000 and addressing uncertainties.

    PubMed

    Sepp Neves, Antonio Augusto; Pinardi, Nadia; Martins, Flavio; Janeiro, Joao; Samaras, Achilleas; Zodiatis, George; De Dominicis, Michela

    2015-08-15

    Oil spills are a transnational problem, and establishing a common standard methodology for Oil Spill Risk Assessments (OSRAs) is thus paramount in order to protect marine environments and coastal communities. In this study we first identified the strengths and weaknesses of the OSRAs carried out in various parts of the globe. We then searched for a generic and recognized standard, i.e. ISO 31000, in order to design a method to perform OSRAs in a scientific and standard way. The new framework was tested for the Lebanon oil spill that occurred in 2006, employing ensemble oil spill modeling to quantify the risks and uncertainties due to unknown spill characteristics. The application of the framework generated valuable visual instruments for the transparent communication of the risks, replacing the use of risk tolerance levels, and thus highlighting the priority areas to protect in case of an oil spill. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. The behaviour change wheel: A new method for characterising and designing behaviour change interventions

    PubMed Central

    2011-01-01

    Background Improving the design and implementation of evidence-based practice depends on successful behaviour change interventions. This requires an appropriate method for characterising interventions and linking them to an analysis of the targeted behaviour. There exists a plethora of frameworks of behaviour change interventions, but it is not clear how well they serve this purpose. This paper evaluates these frameworks, and develops and evaluates a new framework aimed at overcoming their limitations. Methods A systematic search of electronic databases and consultation with behaviour change experts were used to identify frameworks of behaviour change interventions. These were evaluated according to three criteria: comprehensiveness, coherence, and a clear link to an overarching model of behaviour. A new framework was developed to meet these criteria. The reliability with which it could be applied was examined in two domains of behaviour change: tobacco control and obesity. Results Nineteen frameworks were identified covering nine intervention functions and seven policy categories that could enable those interventions. None of the frameworks reviewed covered the full range of intervention functions or policies, and only a minority met the criteria of coherence or linkage to a model of behaviour. At the centre of a proposed new framework is a 'behaviour system' involving three essential conditions: capability, opportunity, and motivation (what we term the 'COM-B system'). This forms the hub of a 'behaviour change wheel' (BCW) around which are positioned the nine intervention functions aimed at addressing deficits in one or more of these conditions; around this are placed seven categories of policy that could enable those interventions to occur. The BCW was used reliably to characterise interventions within the English Department of Health's 2010 tobacco control strategy and the National Institute of Health and Clinical Excellence's guidance on reducing obesity. Conclusions Interventions and policies to change behaviour can be usefully characterised by means of a BCW comprising: a 'behaviour system' at the hub, encircled by intervention functions and then by policy categories. Research is needed to establish how far the BCW can lead to more efficient design of effective interventions. PMID:21513547

  14. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    PubMed

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
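
    The continuous half of the hybrid, PSO over RNN weight vectors, is easy to sketch; the ACO search over discrete network architectures and the RNN dynamics themselves are omitted. The `loss` function is an assumption standing in for how poorly an RNN parameterized by x reproduces the observed expression time series.

      import numpy as np

      def pso_minimize(loss, dim, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # RNN weight vectors
          v = np.zeros_like(x)
          pbest = x.copy()                                 # per-particle bests
          pbest_val = np.array([loss(p) for p in x])
          g = pbest[pbest_val.argmin()].copy()             # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              # Velocity blends inertia, attraction to personal best, and
              # attraction to the swarm's global best.
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              vals = np.array([loss(p) for p in x])
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              g = pbest[pbest_val.argmin()].copy()
          return g, pbest_val.min()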

  15. A meta-ethnography of the acculturation and socialization experiences of migrant care workers.

    PubMed

    Ho, Ken H M; Chiang, Vico C L

    2015-02-01

    To report a meta-ethnography of qualitative research studies exploring the acculturation and socialization experiences of migrant care workers. Migrant care workers are increasingly participating in health and social care in developed countries. There is a need to understand this increasingly socioculturally diversified workforce. A comprehensive search through 12 databases and a manual search of journals related to transculture for studies on socialization and acculturation experiences (published 1993-2013) was completed. The inclusion criteria were peer-reviewed studies on the acculturation or socialization experiences of migrant care workers published in English in any country, using a qualitative or mixed-methods approach. This meta-ethnography employed the seven-phase Noblit and Hare method with reciprocal translation, refutational synthesis and lines-of-argument to synthesize qualitative studies. Three main themes were identified: (a) schema for the migration dream: optimism; (b) the reality of the migration dream: so close, yet so far; and (c) resilience: from chaos to order. A general framework of motivated psychosocial and behavioural adaptation was proposed. This meta-ethnography also revealed the vulnerabilities of migrant nurses in the process of acculturation and socialization. The general framework of behavioural and psychosocial adaptation revealed factors that impede and facilitate behavioural and psychosocial changes. Strategies to enrich external and internal resources should be targeted at encouraging multiculturalism and at improving the psychosocial resources of migrant care workers. It is suggested that research investigating the prominence of nursing vulnerabilities be conducted. © 2014 John Wiley & Sons Ltd.

  16. The dimensions of nursing surveillance: a concept analysis

    PubMed Central

    Kelly, Lesly; Vincent, Deborah

    2011-01-01

    Aim This paper is a report of an analysis of the concept of nursing surveillance. Background Nursing surveillance, a primary function of acute care nurses, is critical to patient safety and outcomes. Although it has been associated with patient outcomes and organizational context of care, little knowledge has been generated about the conceptual and operational process of surveillance. Data sources A search using the CINAHL, Medline and PubMed databases was used to compile an international data set of 18 papers and 4 book chapters published from 1985 to 2009. Review methods Rodgers' evolutionary concept analysis techniques were used to analyse surveillance in a systems framework. This method focused the search on nursing surveillance (as opposed to other medical uses of the term) and used a theoretical framework to guide the analysis. Results The examination of the literature clarifies the multifaceted nature of nursing surveillance in the acute care setting. Surveillance involves purposeful and ongoing acquisition, interpretation and synthesis of patient data for clinical decision-making. Behavioural activities and multiple cognitive processes are used in surveillance in order for the nurse to make decisions for patient safety and health maintenance. A systems approach to the analysis also demonstrates how organizational characteristics and contextual factors influence the process in the acute care environment. Conclusion This conceptual analysis describes the nature of the surveillance process and clarifies the concept for effective communication and future use in health services research. PMID:21129007

  17. A Comparative Analysis of International Frameworks for 21st Century Competences: Implications for National Curriculum Policies

    ERIC Educational Resources Information Center

    Voogt, Joke; Roblin, Natalie Pareja

    2012-01-01

    National curricula need to change drastically to comply with the competences needed for the 21st century. In this paper eight frameworks describing 21st century competences were analysed. A comprehensive search for information about 21st century competences was conducted across the official websites of the selected frameworks, resulting in 32…

  18. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
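
    The same two-stage scheme can be sketched outside ITK; the snippet below uses SciPy's implementation of DIRECT followed by Powell refinement, with a toy quadratic standing in for a (negated) image-similarity metric. This mirrors the paper's design but is not its ITK class, and the parameter choices are assumptions.

      import numpy as np
      from scipy.optimize import direct, minimize   # direct needs SciPy >= 1.9

      def register(metric, bounds):
          # Global stage: DIRECT explores the transform-parameter box.
          coarse = direct(metric, bounds, maxiter=1000)
          # Local stage: Powell refines the best point found by DIRECT.
          fine = minimize(metric, coarse.x, method="Powell", bounds=bounds)
          return fine.x, fine.fun

      # Toy usage: recover a 2-parameter translation by minimizing a quadratic
      # stand-in for a negated image-similarity metric.
      true_shift = np.array([3.0, -1.5])
      metric = lambda p: float(np.sum((np.asarray(p) - true_shift) ** 2))
      params, value = register(metric, [(-10.0, 10.0), (-10.0, 10.0)])

    The division of labor matches the paper's rationale: the global stage protects against the large initial misorientations that defeat a purely local optimizer, while the local stage supplies the final accuracy cheaply.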

  19. Real People Don't Do Boolean: How To Teach End Users To Find High-Quality Information on the Internet.

    ERIC Educational Resources Information Center

    Vine, Rita

    2001-01-01

    Explains how to train users in effective Web searching. Discusses challenges of teaching Web information retrieval; a framework for information searching; choosing the right search tools for users; the seven-step lesson planning process; tips for delivering group Internet training; and things that help people work faster and smarter on the Web.…

  20. Quality Dimensions of Internet Search Engines.

    ERIC Educational Resources Information Center

    Xie, M.; Wang, H.; Goh, T. N.

    1998-01-01

    Reviews commonly used search engines (AltaVista, Excite, infoseek, Lycos, HotBot, WebCrawler), focusing on existing comparative studies; considers quality dimensions from the customer's point of view based on a SERVQUAL framework; and groups these quality expectations in five dimensions: tangibles, reliability, responsiveness, assurance, and…

  1. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by increasing model complexity and numbers of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces, whereas heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the merit of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
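
    The GLUE step itself is straightforward once a parameter sample is available from whichever sampler is used (uniform Monte Carlo in classic GLUE; GA, DE or SCE runs in the approach proposed here). A minimal sketch, with the Nash-Sutcliffe efficiency as the informal likelihood and an assumed behavioural threshold:

      import numpy as np

      def glue_analysis(simulate, observed, samples, threshold=0.7):
          # samples: iterable of parameter vectors, e.g. pooled from GA/DE/SCE
          # runs; Nash-Sutcliffe efficiency serves as the informal likelihood.
          obs_var = np.var(observed)
          keep, likes = [], []
          for theta in samples:
              sim = simulate(theta)
              nse = 1.0 - np.mean((sim - observed) ** 2) / obs_var
              if nse >= threshold:          # retain behavioural runs only
                  keep.append(theta)
                  likes.append(nse)
          if not keep:
              return np.empty((0,)), np.empty((0,))
          w = np.asarray(likes)
          return np.asarray(keep), w / w.sum()   # GLUE likelihood weights

    Prediction bounds then come from weighted quantiles of the simulations belonging to the retained behavioural parameter sets.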

  2. Systematic data integration of clinical trial recruitment locations for geographic-based query and visualization

    PubMed Central

    Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua

    2018-01-01

    Background Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, currently there is no systematic method developed to support clinical investigators to search candidate recruitment sites according to their interested clinical trial factors. Objective In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large amount of clinical trials. Methods The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. Results The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. Conclusion This study contributed a new data processing framework to extract and integrate the location data of heterogeneous recruitment sites from clinical trial documents. The developed system can support effective retrieval and analysis of potential recruitment sites using target clinical trial factors. PMID:29132636
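
    The geocoding normalization described above amounts to mapping free-text site locations to coordinates, with caching because registry records repeat the same city/country strings many times. A minimal sketch using geopy's Nominatim geocoder as a stand-in for whatever geocoding service the study actually used:

      import time
      from geopy.geocoders import Nominatim

      def geocode_sites(site_strings):
          geocoder = Nominatim(user_agent="trial-site-mapper-demo")
          cache, rows = {}, []
          for raw in site_strings:
              key = " ".join(raw.lower().split())   # light normalization
              if key not in cache:
                  loc = geocoder.geocode(key)
                  cache[key] = (loc.latitude, loc.longitude) if loc else None
                  time.sleep(1)                     # respect the rate limit
              rows.append((raw, cache[key]))
          return rows

    The resulting (site, coordinates) pairs can be indexed by condition and intervention to drive the kind of map-based query interface the abstract describes.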

  3. Development of flood routing simulation system of digital Qingjiang based on integrated spatial information technology

    NASA Astrophysics Data System (ADS)

    Yuan, Yanbin; Zhou, You; Zhu, Yaqiong; Yuan, Xiaohui; Sælthun, N. R.

    2007-11-01

    The development of a flood routing simulation system based on digital technology is an important component of the "digital catchment". Taking the Qingjiang catchment as a pilot case, and building on an in-depth analysis of the informatization of catchment management, the study addresses the multi-source, multi-dimension, multi-element, multi-subject, multi-layer and multi-class nature of catchment data by applying the design concept of a "subject-point-source database" (SPSD) to the system architecture, so that large volumes of catchment data can be managed in a unified way. Drawing on integrated spatial information technology, a hierarchical development model of the digital catchment is established; this model serves as the general framework for the analysis, design and realization of the flood routing simulation system. To meet the demands of three-dimensional flood routing simulation, an object-oriented spatial data model is designed. Spatio-temporal adaptive relations between flood routing and catchment topography are analyzed: the terrain grid is expressed as an undirected graph, and a breadth-first search algorithm is applied to dynamically trace stream channels on the simulated three-dimensional terrain. A system prototype was implemented, and simulation results demonstrate that the proposed approach is feasible and effective.
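
    The channel-search idea, breadth-first search over a terrain grid treated as a graph, can be sketched compactly. The non-ascending-neighbor rule below is an illustrative simplification of how a channel might be traced on the simulated terrain:

      from collections import deque

      def trace_channel(elev, source):
          # elev: 2D grid of elevations; source: (row, col) of the channel head.
          # Expand to 4-neighbors that do not ascend, routing flow downhill
          # or across flats (a simplification of real channel extraction).
          rows, cols = len(elev), len(elev[0])
          visited = {source}
          queue = deque([source])
          channel = []
          while queue:
              r, c = queue.popleft()
              channel.append((r, c))
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if (0 <= nr < rows and 0 <= nc < cols
                          and (nr, nc) not in visited
                          and elev[nr][nc] <= elev[r][c]):
                      visited.add((nr, nc))
                      queue.append((nr, nc))
          return channel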

  4. Highly Asynchronous VisitOr Queue Graph Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, R.

    2012-10-01

    HAVOQGT is a C++ framework that can be used to create highly parallel graph traversal algorithms. The framework stores the graph and algorithmic data structures in external memory that is typically mapped to high-performance, locally attached NAND flash arrays. The framework supports a vertex-centered visitor programming model and has been used to implement breadth-first search, connected components, and single-source shortest path.

  5. eHealth Search Patterns: A Comparison of Private and Public Health Care Markets Using Online Panel Data.

    PubMed

    Schneider, Janina Anne; Holland, Christopher Patrick

    2017-04-13

    Patient and consumer access to eHealth information is of crucial importance because of its role in patient-centered medicine and to improve knowledge about general aspects of health and medical topics. The objectives were to analyze and compare eHealth search patterns in a private (United States) and a public (United Kingdom) health care market. A new taxonomy of eHealth websites is proposed to organize the largest eHealth websites. An online measurement framework is developed that provides a precise and detailed measurement system. Online panel data are used to accurately track and analyze detailed search behavior across 100 of the largest eHealth websites in the US and UK health care markets. The health, medical, and lifestyle categories account for approximately 90% of online activity, and e-pharmacies, social media, and professional categories account for the remaining 10% of online activity. Overall search penetration of eHealth websites is significantly higher in the private (United States) than the public market (United Kingdom). Almost twice the number of eHealth users in the private market have adopted online search in the health and lifestyle categories and also spend more time per website than those in the public market. The use of medical websites for specific conditions is almost identical in both markets. The allocation of search effort across categories is similar in both the markets. For all categories, the vast majority of eHealth users only access one website within each category. Those that conduct a search of two or more websites display very narrow search patterns. All users spend relatively little time on eHealth, that is, 3-7 minutes per website. The proposed online measurement framework exploits online panel data to provide a powerful and objective method of analyzing and exploring eHealth behavior. The private health care system does appear to have an influence on eHealth search behavior in terms of search penetration and time spent per website in the health and lifestyle categories. Two explanations are offered: (1) the personal incentive of medical costs in the private market incentivizes users to conduct online search; and (2) health care information is more easily accessible through health care professionals in the United Kingdom compared with the United States. However, the use of medical websites is almost identical, suggesting that patients interested in a specific condition have a motivation to search and evaluate health information, irrespective of the health care market. The relatively low level of search in terms of the number of websites accessed and the average time per website raise important questions about the actual level of patient informedness in both the markets. Areas for future research are outlined. ©Janina Anne Schneider, Christopher Patrick Holland. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 13.04.2017.

  6. The Foundations Framework for Developing and Reporting New Models of Care for Multimorbidity

    PubMed Central

    Stokes, Jonathan; Man, Mei-See; Guthrie, Bruce; Mercer, Stewart W.; Salisbury, Chris; Bower, Peter

    2017-01-01

    PURPOSE Multimorbidity challenges health systems globally. New models of care are urgently needed to better manage patients with multimorbidity; however, there is no agreed framework for designing and reporting models of care for multimorbidity and their evaluation. METHODS Based on findings from a literature search to identify models of care for multimorbidity, we developed a framework to describe these models. We illustrate the application of the framework by identifying the focus of and gaps in current models of care, and by describing the evolution of models over time. RESULTS Our framework describes each model in terms of its theoretical basis and target population (the foundations of the model) and of the elements of care implemented to deliver the model. We categorized elements of care into 3 types: (1) clinical focus, (2) organization of care, and (3) support for model delivery. Application of the framework identified a limited use of theory in model design and a strong focus on some patient groups (elderly patients, high users) more than others (younger patients, deprived populations). We found changes in elements over time, with a decrease in models implementing home care and an increase in models offering extended appointments. CONCLUSIONS By encouraging greater clarity about the underpinning theory and target population, and by categorizing the wide range of potentially important elements of an intervention to improve care for patients with multimorbidity, the framework may be useful in designing and reporting models of care and may help advance the currently limited evidence base. PMID:29133498

  7. Data-driven model-independent searches for long-lived particles at the LHC

    NASA Astrophysics Data System (ADS)

    Coccaro, Andrea; Curtin, David; Lubatti, H. J.; Russell, Heather; Shelton, Jessie

    2016-12-01

    Neutral long-lived particles (LLPs) are highly motivated by many beyond the Standard Model scenarios, such as theories of supersymmetry, baryogenesis, and neutral naturalness, and present both tremendous discovery opportunities and experimental challenges for the LHC. A major bottleneck for current LLP searches is the prediction of Standard Model backgrounds, which are often impossible to simulate accurately. In this paper, we propose a general strategy for obtaining differential, data-driven background estimates in LLP searches, thereby notably extending the range of LLP masses and lifetimes that can be discovered at the LHC. We focus on LLPs decaying in the ATLAS muon system, where triggers providing both signal and control samples are available at LHC run 2. While many existing searches require two displaced decays, a detailed knowledge of backgrounds will allow for very inclusive searches that require just one detected LLP decay. As we demonstrate for the h → XX signal model of LLP pair production in exotic Higgs decays, this results in dramatic sensitivity improvements for proper lifetimes ≳ 10 m. In theories of neutral naturalness, this extends reach to glueball masses far below the bb̄ threshold. Our strategy readily generalizes to other signal models and other detector subsystems. This framework therefore lends itself to the development of a systematic, model-independent LLP search program, in analogy to the highly successful simplified-model framework of prompt searches.

  8. A rapid place name locating algorithm based on ontology qualitative retrieval, ranking and recommendation

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Zhu, Anfeng; Zhang, Weixia

    2015-12-01

    To enable rapid positioning of 12315 hotline complaints, which are expressed in natural language over the telephone, a semantic retrieval framework is proposed based on natural language parsing and place-name ontology reasoning. Within this framework, a search-result ranking and recommendation algorithm is proposed that considers both place-name conceptual similarity and spatial geometric-relation similarity. Experiments show that this method can help operators quickly locate 12315 complaints, increasing customer satisfaction with industry and commerce services.

  9. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology.

    PubMed

    Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R

    2017-01-01

    We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modeling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences, such as metabolic engineering, synthetic biology and drug scheduling.
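
    saCeSS2 itself is not reproduced here, but the cooperative mechanism the abstract credits for superlinear speedups can be sketched. In the toy Python sketch below, several scatter-search-style islands each evolve a small reference set on a mixed-integer test objective and periodically adopt the global best solution; the toy objective, the combination rule and all names are illustrative assumptions, not the published algorithm.

```python
import random

# Toy mixed-integer objective: continuous x plus binary mask y.
def objective(x, y):
    return sum((xi - 1.0) ** 2 for xi in x) + sum(y)  # minimum at x=1, y=0

def combine(a, b):
    # Combine two solutions: average continuous parts, crossover binaries.
    x = [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a[0], b[0])]
    y = [random.choice(pair) for pair in zip(a[1], b[1])]
    return (x, y)

def random_solution(n_cont, n_bin):
    return ([random.uniform(-5, 5) for _ in range(n_cont)],
            [random.randint(0, 1) for _ in range(n_bin)])

def step(refset):
    # One scatter-search-like iteration: combine all pairs, keep the best.
    children = [combine(a, b) for a in refset for b in refset if a is not b]
    pool = refset + children
    pool.sort(key=lambda s: objective(*s))
    return pool[:len(refset)]

random.seed(1)
islands = [[random_solution(3, 2) for _ in range(5)] for _ in range(4)]
for epoch in range(20):
    islands = [step(refset) for refset in islands]
    if epoch % 5 == 4:  # cooperation: broadcast the global best to all islands
        best = min((s for refset in islands for s in refset),
                   key=lambda s: objective(*s))
        islands = [refset[:-1] + [best] for refset in islands]
print(min(objective(*s) for refset in islands for s in refset))
```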

  10. LIFESPAN: A tool for the computer-aided design of longitudinal studies

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Hertzog, Christopher; Lindenberger, Ulman

    2015-01-01

    Researchers planning a longitudinal study typically search, more or less informally, a multivariate space of possible study designs that include dimensions such as the hypothesized true variance in change, indicator reliability, the number and spacing of measurement occasions, total study time, and sample size. The main search goal is to select a research design that best addresses the guiding questions and hypotheses of the planned study while heeding applicable external conditions and constraints, including time, money, feasibility, and ethical considerations. Because longitudinal study selection ultimately requires optimization under constraints, it is amenable to the general operating principles of optimization in computer-aided design. Based on power equivalence theory (MacCallum et al., 2010; von Oertzen, 2010), we propose a computational framework to promote more systematic searches within the study design space. Starting with an initial design, the proposed framework generates a set of alternative models with equal statistical power to detect hypothesized effects, and delineates trade-off relations among relevant parameters, such as total study time and the number of measurement occasions. We present LIFESPAN (Longitudinal Interactive Front End Study Planner), which implements this framework. LIFESPAN boosts the efficiency, breadth, and precision of the search for optimal longitudinal designs. Its initial version, which is freely available at http://www.brandmaier.de/lifespan, is geared toward the power to detect variance in change as specified in a linear latent growth curve model. PMID:25852596

  11. A Patient-Centered Framework for Evaluating Digital Maturity of Health Services: A Systematic Review

    PubMed Central

    Callahan, Ryan; Darzi, Ara; Mayer, Erik

    2016-01-01

    Background Digital maturity is the extent to which digital technologies are used as enablers to deliver a high-quality health service. Extensive literature exists about how to assess the components of digital maturity, but it has not been used to design a comprehensive framework for evaluation. Consequently, the measurement systems that do exist are limited to evaluating digital programs within one service or care setting, meaning that digital maturity evaluation is not accounting for the needs of patients across their care pathways. Objective The objective of our study was to identify the best methods and metrics for evaluating digital maturity and to create a novel, evidence-based tool for evaluating digital maturity across patient care pathways. Methods We systematically reviewed the literature to find the best methods and metrics for evaluating digital maturity. We searched the PubMed database for all papers relevant to digital maturity evaluation. Papers were selected if they provided insight into how to appraise digital systems within the health service and if they indicated the factors that constitute or facilitate digital maturity. Papers were analyzed to identify methodology for evaluating digital maturity and indicators of digitally mature systems. We then used the resulting information about methodology to design an evaluation framework. Following that, the indicators of digital maturity were extracted and grouped into increasing levels of maturity and operationalized as metrics within the evaluation framework. Results We identified 28 papers as relevant to evaluating digital maturity, from which we derived 5 themes. The first theme concerned general evaluation methodology for constructing the framework (7 papers). The following 4 themes were the increasing levels of digital maturity: resources and ability (6 papers), usage (7 papers), interoperability (3 papers), and impact (5 papers). The framework includes metrics for each of these levels at each stage of the typical patient care pathway. Conclusions The framework uses a patient-centric model that departs from traditional service-specific measurements and allows for novel insights into how digital programs benefit patients across the health system. Trial Registration N/A PMID:27080852

  12. Mapping Informative Clusters in a Hierarchical Framework of fMRI Multivariate Analysis

    PubMed Central

    Xu, Rui; Zhen, Zonglei; Liu, Jia

    2010-01-01

    Pattern recognition methods, which are powerful in discriminating between multi-voxel patterns of brain activity associated with different mental states, have become increasingly popular in fMRI data analysis. However, when they are used in functional brain mapping, the location of discriminative voxels varies significantly, raising difficulties in interpreting the locus of the effect. Here we proposed a hierarchical framework of multivariate approach that maps informative clusters rather than voxels to achieve reliable functional brain mapping without compromising the discriminative power. In particular, we first searched for local homogeneous clusters that consisted of voxels with similar response profiles. Then, a multi-voxel classifier was built for each cluster to extract discriminative information from the multi-voxel patterns. Finally, through multivariate ranking, outputs from the classifiers served as a multi-cluster pattern to identify informative clusters by examining interactions among clusters. Results from both simulated and real fMRI data demonstrated that this hierarchical approach showed better performance in the robustness of functional brain mapping than traditional voxel-based multivariate methods. In addition, the mapped clusters were highly overlapping for two perceptually equivalent object categories, further confirming the validity of our approach. In short, the hierarchical framework of multivariate approach is suitable for both pattern classification and brain mapping in fMRI studies. PMID:21152081

  13. An Extensible "SCHEMA-LESS" Database Framework for Managing High-Throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  14. An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
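
    NETMARK's internal design is not detailed in these abstracts; the sketch below only illustrates the general schema-less idea they describe, decomposing a hierarchical document into node rows so a keyword can be matched against both context (element names) and content (text). The table layout and function names are assumptions for illustration, not NETMARK's actual design.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of schema-less storage: every element becomes one row keyed by
# its path, so keyword search can span context (tags) and content (text).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (doc TEXT, path TEXT, tag TEXT, content TEXT)")

def store(doc_id, xml_text):
    def walk(elem, path):
        path = f"{path}/{elem.tag}"
        db.execute("INSERT INTO nodes VALUES (?, ?, ?, ?)",
                   (doc_id, path, elem.tag, (elem.text or "").strip()))
        for child in elem:
            walk(child, path)
    walk(ET.fromstring(xml_text), "")

def keyword_search(word):
    # Match against both element names (context) and text (content).
    like = f"%{word}%"
    return db.execute("SELECT doc, path, content FROM nodes "
                      "WHERE tag LIKE ? OR content LIKE ?",
                      (like, like)).fetchall()

store("rpt1", "<report><title>Engine test</title>"
              "<body>Thrust nominal</body></report>")
print(keyword_search("title"))   # matches on context (tag name)
print(keyword_search("Thrust"))  # matches on content
```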

  15. Hybrid Optimization Parallel Search PACKage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
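
    HOPSPACK's own implementation is in C++ and evaluates trial points in parallel with caching; the sketch below shows only the core generating set search idea in sequential Python form: poll along the +/- coordinate directions, move on improvement, and contract the step after a failed poll. Names and defaults are illustrative.

```python
def gss_minimize(f, x0, step=1.0, tol=1e-6, max_evals=10000):
    """Minimal generating set search sketch: poll along +/- coordinate
    directions; accept the first improving point, else contract the step."""
    x, fx, evals = list(x0), f(x0), 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                evals += 1
                if ft < fx:            # success: move and keep the step
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # failed poll: contract
    return x, fx

# Derivative-free minimization of a smooth test function.
rosen = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(gss_minimize(rosen, [-1.2, 1.0]))
```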

  16. Pattern search in multi-structure data: a framework for the next-generation evidence-based medicine

    NASA Astrophysics Data System (ADS)

    Sukumar, Sreenivas R.; Ainsworth, Keela C.

    2014-03-01

    With the impetus towards personalized and evidence-based medicine, the need for a framework to analyze/interpret quantitative measurements (blood work, toxicology, etc.) with qualitative descriptions (specialist reports after reading images, bio-medical knowledgebases, etc.) to predict diagnostic risks is fast emerging. Addressing this need, we pose and answer the following questions: (i) How can we jointly analyze and explore measurement data in context with qualitative domain knowledge? (ii) How can we search for and hypothesize patterns (not known a priori) from such multi-structure data? (iii) How can we build predictive models by integrating weakly-associated, multi-relational, multi-structure data? We propose a framework towards answering these questions. We describe a software solution that leverages hardware for scalable in-memory analytics and applies next-generation semantic query tools to medical data.

  17. Discovering Tradeoffs, Vulnerabilities, and Dependencies within Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P. M.

    2015-12-01

    There is growing recognition of, and interest in, using emerging computational tools to discover the tradeoffs that emerge across complex combinations of infrastructure options, adaptive operations, and signposts. As a field concerned with "deep uncertainties", it is logically consistent to acknowledge more directly that our choices for dealing with computationally demanding simulations, advanced search algorithms, and sensitivity analysis tools are themselves subject to failures that could adversely bias our understanding of how systems' vulnerabilities change with proposed actions. Balancing simplicity versus complexity in our computational frameworks is nontrivial given that we are often exploring high-impact, irreversible decisions. It is not always clear that accepted models even encompass important failure modes. Moreover, as they become more complex and computationally demanding, the benefits and consequences of simplifications are often untested. This presentation discusses our efforts to address these challenges through our "many-objective robust decision making" (MORDM) framework for the design and management of water resources systems. The MORDM framework has four core components: (1) elicited problem conception and formulation, (2) parallel many-objective search, (3) interactive visual analytics, and (4) negotiated selection of robust alternatives. Problem conception and formulation is the process of abstracting a practical design problem into a mathematical representation. We build on emerging work in visual analytics to exploit interactive visualization of both the design space and the objective space in multiple heterogeneous linked views that permit exploration and discovery. Many-objective search produces tradeoff solutions from potentially competing problem formulations that can each consider up to ten conflicting objectives based on current computational search capabilities. Negotiated design selection uses interactive visualization, reformulation, and optimization to discover desirable designs for implementation. Multi-city urban water supply portfolio planning will be used to illustrate the MORDM framework.

  18. Understanding the processes of writing papers reflectively.

    PubMed

    Regmi, Krishna; Naidoo, Jennie

    2013-07-01

    This paper explores the writing of research papers using reflective frameworks. Reflective practice is integral to professional education and development. However, healthcare students, academics and practitioners have given limited attention to how to write reflectively. In addition, there are limited resources on the practical aspects of writing papers reflectively. The following major databases were searched: PubMed, Medline, King's Library, Excerpta Medica Database, Department of Health database, Cumulative Index to Nursing and Allied Health Literature. The searches were conducted using 'free text' and 'index' terms. Only relevant papers published in English were reviewed and scrutinised. Unpublished reports, internal publications, snowballing from the reference lists and personal contacts were also included in the search. This is a review paper that critiques the frameworks used for reflective practice. Writing papers reflectively is a complex task. Healthcare professionals and researchers need to understand the meaning of reflection and make appropriate use of reflective frameworks. Demystifying the process of writing papers reflectively will help professionals develop skills and competencies. IMPLICATIONS FOR RESEARCH/PRACTICE: This article provides a practical guide to reflection and how nursing and allied healthcare students, academics and practitioners can practise it. The paper identifies four generic stages in frameworks: description, assessment, evaluation and action, which are illustrated by annotated 'skeletal' examples. It is hoped that this will assist the process of reflective practice, writing and learning.

  19. Lessons Learned from Development of De-identification System for Biomedical Research in a Korean Tertiary Hospital.

    PubMed

    Shin, Soo-Yong; Lyu, Yongman; Shin, Yongdon; Choi, Hyo Joung; Park, Jihyun; Kim, Woo-Sung; Lee, Jae Ho

    2013-06-01

    The Korean government has enacted two laws, namely, the Personal Information Protection Act and the Bioethics and Safety Act, to prevent the unauthorized use of medical information. To protect patients' privacy by complying with governmental regulations and to improve the convenience of research, Asan Medical Center has been developing a de-identification system for biomedical research. We reviewed Korean regulations to define the scope of the de-identification methods, and well-known previous biomedical research platforms to extract the functionalities of the systems. Based on these review results, we implemented the necessary programs on the Asan Medical Center Information System framework, which was built using the Microsoft .NET Framework and C#. The developed de-identification system comprises three main components: a de-identification tool, a search tool, and a chart review tool. The de-identification tool can substitute a randomly assigned research ID for a hospital patient ID, remove identifiers in structured formats, and mask them in unstructured formats, i.e., texts. This tool achieved 98.14% precision and 97.39% recall for 6,520 clinical notes. The search tool can find the number of patients that satisfy given search criteria. The chart review tool can provide de-identified patient clinical data for review purposes. We found that a clinical data warehouse was essential for successful implementation of the de-identification system, and that this system should be tightly linked to an electronic Institutional Review Board system for easy operation by honest brokers. Additionally, we found that a secure cloud environment could be adopted to protect patients' privacy more thoroughly.
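
    The actual system is built in C#/.NET and is not shown in the abstract; the following Python sketch illustrates the two core operations described, substituting a stable random research ID for a patient ID and masking identifiers in free text. The regex patterns and names are illustrative stand-ins, not the real masking rules.

```python
import re
import secrets

# Map each hospital patient ID to a stable, random research ID.
_id_map = {}

def research_id(patient_id):
    if patient_id not in _id_map:
        _id_map[patient_id] = "R-" + secrets.token_hex(4)
    return _id_map[patient_id]

# Illustrative masking rules for identifiers in unstructured text.
PATTERNS = [
    (re.compile(r"\b\d{6}-\d{7}\b"), "[RRN]"),  # national-ID-like numbers
    (re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.]+@[\w.]+\b"), "[EMAIL]"),
]

def mask(note):
    for pattern, token in PATTERNS:
        note = pattern.sub(token, note)
    return note

print(research_id("P0001234"))
print(mask("Call 010-1234-5678 or write to jane.doe@example.org."))
```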

  20. Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data

    NASA Astrophysics Data System (ADS)

    Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd

    2018-02-01

    In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the Advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the Advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known, but the frequency and spin-down rates of the neutron stars are unknown, which makes the searches computationally limited. In a previous paper we proposed a general framework for deciding on which targets we should spend computational resources and in what proportion, which frequency and spin-down ranges we should search for each target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the three most promising supernova remnants identified in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: the smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and run on the volunteer computing project Einstein@Home, covers a much larger frequency range.

  1. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    PubMed

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and can therefore help in understanding the driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from a broad range of environments, while tackling several computational challenges. In order to scale to high demand, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed nearest-neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query is the contextualization of the microbiome sample against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including bar-chart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state-of-the-art tool.
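
    Weighted UniFrac can be viewed as an earth mover's distance over a phylogenetic tree; as a simplified, tree-free stand-in, the sketch below ranks database samples by the plain 1-D earth mover's distance between relative-abundance profiles. It is a conceptual illustration with invented sample names, not Visibiome's implementation.

```python
import numpy as np

def emd_1d(p, q):
    """Earth mover's distance between two histograms on a common support
    with unit spacing: the L1 distance between their normalized CDFs."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    return np.abs(np.cumsum(p - q)).sum()

def rank_matches(query, database):
    """Return database sample ids sorted by distance to the query."""
    scores = [(name, emd_1d(query, profile))
              for name, profile in database.items()]
    return sorted(scores, key=lambda item: item[1])

query = [5, 20, 50, 25, 0]
database = {
    "soil_A":   [4, 18, 52, 26, 0],
    "gut_B":    [60, 30, 5, 5, 0],
    "marine_C": [10, 25, 40, 20, 5],
}
print(rank_matches(query, database))  # closest environment first
```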

  2. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process, the Airline Evolutionary Simulation (Airline EVOS), whose fidelity of detail accounts for airline and consumer behaviors and the interdependencies they share with each other and with the National Airspace System (NAS). More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation Systems Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to data from other demand forecast models. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
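
    The paper's search algorithm is not reproduced in the abstract; the sketch below shows only the general shape of such a calibration, iteratively nudging a market-level input until the simulated output lands within a threshold of its target. The toy constant-elasticity demand model and all names are assumptions for illustration.

```python
def calibrate_market(simulate, target_demand, fare0, tol=0.01, max_iter=100):
    """Adjust the fare until simulated demand is within tol (relative)
    of the target. `simulate(fare)` stands in for the agent-based model."""
    fare = fare0
    for _ in range(max_iter):
        demand = simulate(fare)
        error = (demand - target_demand) / target_demand
        if abs(error) <= tol:
            return fare, demand
        fare *= 1.0 + 0.5 * error  # demand too high -> raise fare, and vice versa
    return fare, demand

# Toy market: constant-elasticity demand with elasticity -1.5.
toy_model = lambda fare: 1000.0 * (fare / 200.0) ** -1.5
print(calibrate_market(toy_model, target_demand=800.0, fare0=150.0))
```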

  3. Indicators and Measurement Tools for Health Systems Integration: A Knowledge Synthesis

    PubMed Central

    Oelke, Nelly D.; da Silva Lima, Maria Alice Dias; Stiphout, Michelle; Janke, Robert; Witt, Regina Rigatto; Van Vliet-Brown, Cheryl; Schill, Kaela; Rostami, Mahnoush; Hepp, Shelanne; Birney, Arden; Al-Roubaiai, Fatima; Marques, Giselda Quintana

    2017-01-01

    Background: Despite far-reaching support for integrated care, conceptualizing and measuring integrated care remains challenging. This knowledge synthesis aimed to identify indicator domains and tools to measure progress towards integrated care. Methods: We used an established framework and a Delphi survey with integration experts to identify relevant measurement domains. For each domain, we searched and reviewed the literature for relevant tools. Findings: From 7,133 abstracts, we retrieved 114 unique tools. We found many quality tools to measure care coordination, patient engagement and team effectiveness/performance. In contrast, there were few tools in the domains of performance measurement and information systems, alignment of organizational goals and resource allocation. The search yielded 12 tools that measure overall integration or three or more indicator domains. Discussion: Our findings highlight a continued gap in tools to measure the foundational components that support integrated care. In the absence of such targeted tools, “overall integration” tools may be useful for a broad assessment of the overall state of a system. Conclusions: Continued progress towards integrated care depends on our ability to evaluate the success of strategies across different levels and contexts. This study has identified 114 tools that measure integrated care across 16 domains, supporting efforts towards a unified measurement framework. PMID:29588637

  4. Wavelength band selection method for multispectral target detection.

    PubMed

    Karlholm, Jörgen; Renhorn, Ingmar

    2002-11-10

    A framework is proposed for the selection of wavelength bands for multispectral sensors by use of hyperspectral reference data. Using results from detection theory, we derive a cost function that is minimized by a set of spectral bands optimal in terms of detection performance for discrimination between a class of small, rare targets and clutter with a known spectral distribution. The method may be used, e.g., in the design of multispectral infrared search-and-track and electro-optical missile warning sensors, where a low false-alarm rate and a high detection probability for small targets against a clutter background are of critical importance, but the required high frame rate prevents the use of hyperspectral sensors.

  5. Human Resource Development, Social Capital, Emotional Intelligence: Any Link to Productivity?

    ERIC Educational Resources Information Center

    Brooks, Kit; Nafukho, Fredrick Muyia

    2006-01-01

    Purpose: This article aims to offer a theoretical framework that attempts to show the integration among human resource development (HRD), social capital (SC), emotional intelligence (EI) and organizational productivity. Design/methodology/approach: The literature search included the following: a computerized search of accessible and available…

  6. Ontology Mappings to Improve Learning Resource Search

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Hatala, Marek

    2006-01-01

    This paper proposes an ontology mapping-based framework that allows searching for learning resources using multiple ontologies. The present applications of ontologies in e-learning use various ontologies (eg, domain, curriculum, context), but they do not give a solution on how to interoperate e-learning systems based on different ontologies. The…

  7. Online Searching in PBL Tutorials

    ERIC Educational Resources Information Center

    Jin, Jun; Bridges, Susan M.; Botelho, Michael G.; Chan, Lap Ki

    2015-01-01

    This study aims to explore how online searching plays a role during PBL tutorials in two undergraduate health sciences curricula, Medicine and Dentistry. Utilizing Interactional Ethnography (IE) as an organizing framework for data collection and analysis, and drawing on a critical theory of technology as an explanatory lens, enabled a textured…

  8. CAD-CAM database management at Bendix Kansas City

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, D.R.

    1985-05-01

    The Bendix Kansas City Division of Allied Corporation began integrating mechanical CAD-CAM capabilities into its operations in June 1980. The primary capabilities include a wireframe modeling application, a solid modeling application, and the Bendix Integrated Computer Aided Manufacturing (BICAM) System application, a set of software programs and procedures which provides user-friendly access to graphic applications and data, and user-friendly sharing of data between applications and users. BICAM also provides for enforcement of corporate/enterprise policies. Three access categories, private, local, and global, are realized through the implementation of data-management metaphors: the desk, reading rack, file cabinet, and library are for the storage, retrieval, and sharing of drawings and models. Access is provided through menu selections; searching for designs is done by a paging method or a search-by-attribute-value method. The sharing of designs between all users of Part Data is key. The BICAM System supports 375 unique users per quarter and manages over 7500 drawings and models. The BICAM System demonstrates the need for generalized models, a high-level system framework, prototyping, information-modeling methods, and an understanding of the entire enterprise. Future BICAM System implementations are planned to take advantage of this knowledge.

  9. In search of tools to aid logical thinking and communicating about medical decision making.

    PubMed

    Hunink, M G

    2001-01-01

    To have real-time impact on medical decision making, decision analysts need a wide variety of tools to aid logical thinking and communication. Decision models provide a formal framework to integrate evidence and values, but they are commonly perceived as complex and difficult to understand by those unfamiliar with the methods, especially in the context of clinical decision making. The theory of constraints, introduced by Eliyahu Goldratt in the business world, provides a set of tools for logical thinking and communication that could potentially be useful in medical decision making. The author used the concept of a conflict resolution diagram to analyze the decision to perform carotid endarterectomy prior to coronary artery bypass grafting in a patient with both symptomatic coronary and asymptomatic carotid artery disease. The method enabled clinicians to visualize and analyze the issues, identify and discuss the underlying assumptions, search for the best available evidence, and use the evidence to make a well-founded decision. The method also facilitated communication among those involved in the care of the patient. Techniques from fields other than decision analysis can potentially expand the repertoire of tools available to support medical decision making and to facilitate communication in decision consults.

  10. Components of visual search in childhood-onset schizophrenia and attention-deficit/hyperactivity disorder.

    PubMed

    Karatekin, C; Asarnow, R F

    1998-10-01

    This study tested the hypotheses that visual search impairments in schizophrenia are due to a delay in the initiation of search or to a slow rate of serial search. We determined the specificity of these impairments by comparing children with schizophrenia to children with attention-deficit/hyperactivity disorder (ADHD) and age-matched normal children. The hypotheses were tested within the framework of feature integration theory by administering to the children tasks tapping parallel and serial search. Search rate was estimated from the slope of the search functions, and the duration of the initial stages of search from the time to make the first saccade on each trial. As expected, manual response times were elevated in both clinical groups. Contrary to expectation, children with ADHD, but not schizophrenia, were delayed in the initiation of serial search. Finally, both groups showed a clear dissociation between intact parallel search rates and slowed serial search rates.

  11. Thermoelectricity in transition metal compounds: The role of spin disorder

    DOE PAGES

    Gorai, Prashun; Toberer, Eric S.; Stevanović, Vladan

    2016-11-01

    Here, at room temperature and above, most magnetic materials adopt a spin-disordered (paramagnetic) state whose electronic properties can differ significantly from those of their low-temperature, spin-ordered counterparts. Yet computational searches for new functional materials usually assume some type of magnetic order. In the present work, we demonstrate a methodology to incorporate spin disorder in computational searches and predict the electronic properties of the paramagnetic phase. We implement this method in a high-throughput framework to assess the potential for thermoelectric performance of 1350 transition-metal sulfides and find that all magnetic systems we identify as promising in the spin-ordered ground state cease to be promising in the paramagnetic phase due to disorder-induced deterioration of the charge carrier transport properties. We also identify promising non-magnetic candidates that do not suffer from these spin disorder effects. In addition to identifying promising materials, our results offer insights into the apparent scarcity of magnetic systems among known thermoelectrics and highlight the importance of including spin disorder in computational searches.

  12. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
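
    As a rough illustration of generating candidates inside a point group (not the SASS code itself), the sketch below builds random geometries with Cs symmetry by construction: atoms either sit on the mirror plane or are added as mirror-image pairs, so every candidate already has the requested symmetry before any optimization. The partitioning into on-plane atoms and pairs is an assumed, simplified stand-in for the framework-group bookkeeping.

```python
import random

def random_cs_structure(n_on_plane, n_pairs, box=3.0):
    """Random geometry with Cs symmetry (mirror plane y = 0): atoms either
    sit on the plane or appear as (x, y, z) / (x, -y, z) mirror pairs."""
    atoms = []
    for _ in range(n_on_plane):
        atoms.append((random.uniform(-box, box), 0.0,
                      random.uniform(-box, box)))
    for _ in range(n_pairs):
        x, y, z = (random.uniform(-box, box) for _ in range(3))
        y = abs(y) + 0.1          # keep pairs strictly off the plane
        atoms.append((x, y, z))
        atoms.append((x, -y, z))  # mirror image
    return atoms

random.seed(0)
# One 7-atom candidate (Li5H2-sized): 3 atoms on the plane + 2 mirror pairs.
for atom in random_cs_structure(3, 2):
    print("%7.3f %7.3f %7.3f" % atom)
```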

  13. Bayesian screening for active compounds in high-dimensional chemical spaces combining property descriptors and molecular fingerprints.

    PubMed

    Vogt, Martin; Bajorath, Jürgen

    2008-01-01

    Bayesian classifiers are increasingly being used to distinguish active from inactive compounds and to search large databases for novel active molecules. We introduce a Bayesian approach that directly combines the contributions of property descriptors and molecular fingerprints in the search for active compounds. Conventionally, property descriptors and fingerprints are used as alternative features for virtual screening methods. Following the approach introduced here, probability distributions of descriptor values and fingerprint bit settings are calculated for active and database molecules, and the divergence between the resulting combined distributions is determined as a measure of biological activity. In test calculations on a large number of compound activity classes, this methodology was found to consistently perform better than similarity searching using fingerprints and multiple reference compounds, or Bayesian screening calculations using probability distributions calculated only from property descriptors. These findings demonstrate that there is considerable synergy between different types of property descriptors and fingerprints in recognizing diverse structure-activity relationships, at least in the context of Bayesian modeling.
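
    The paper's exact divergence measure is not given in the abstract; the sketch below only illustrates the underlying idea of scoring a molecule by combining continuous descriptor likelihoods (modeled as Gaussians) with fingerprint bit likelihoods (modeled as Bernoulli probabilities) in a single naive-Bayes-style log-odds. All distribution parameters are invented for illustration.

```python
import math

def gaussian_loglik(x, mean, std):
    return (-0.5 * math.log(2 * math.pi * std ** 2)
            - (x - mean) ** 2 / (2 * std ** 2))

def score(descriptors, bits, active, database):
    """Naive-Bayes style log-odds that a molecule is active, combining
    continuous property descriptors with binary fingerprint bits."""
    s = 0.0
    for x, (ma, sa), (md, sd) in zip(descriptors, active["desc"],
                                     database["desc"]):
        s += gaussian_loglik(x, ma, sa) - gaussian_loglik(x, md, sd)
    for b, pa, pd in zip(bits, active["bits"], database["bits"]):
        s += math.log(pa if b else 1 - pa) - math.log(pd if b else 1 - pd)
    return s

# Illustrative distributions estimated from actives vs. the database:
active   = {"desc": [(350.0, 40.0), (2.5, 0.8)], "bits": [0.9, 0.2]}
database = {"desc": [(300.0, 80.0), (1.0, 1.2)], "bits": [0.3, 0.25]}

# Candidate: molecular weight 340, logP 2.2, fingerprint bits 1 and 0.
print(score([340.0, 2.2], [1, 0], active, database))  # > 0: "active-like"
```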

  14. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  15. Pattern Search in Multi-structure Data: A Framework for the Next-Generation Evidence-based Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Ainsworth, Keela C

    With the advent of personalized and evidence-based medicine, the need for a framework to analyze/interpret quantitative measurements (blood work, toxicology, etc.) with qualitative descriptions (specialist reports after reading images, bio-medical knowledge-bases) to predict diagnostic risks is fast emerging. Addressing this need, we pose and address the following questions: (i) How can we jointly analyze both qualitative and quantitative data? (ii) Is the fusion of multi-structure data expected to provide better insights than either of them individually? We present experiments on two bio-medical data sets, mammography and traumatic brain studies, to demonstrate architectures and tools for evidence-pattern search.

  16. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.

    PubMed

    Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting

    2018-02-12

    Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which can help users explore neuron morphologies in an interactive and immersive manner.
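
    The paper's specific binary coding method is not described in the abstract; as a generic stand-in, the sketch below compresses feature vectors into short binary codes with random-hyperplane hashing and ranks database entries by Hamming distance, which is the kind of compact-code retrieval the abstract refers to. The fake features and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim, n_bits=16):
    planes = rng.normal(size=(n_bits, dim))
    return lambda v: (planes @ v > 0).astype(np.uint8)  # sign pattern = code

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Fake deep features for a small "database" of neurons plus a query.
features = rng.normal(size=(1000, 128))
query = features[42] + 0.05 * rng.normal(size=128)  # noisy copy of item 42

encode = make_hasher(128)
codes = np.array([encode(f) for f in features])
qcode = encode(query)

ranked = sorted(range(len(codes)), key=lambda i: hamming(codes[i], qcode))
print(ranked[:5])  # item 42 should rank at or near the top
```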

  17. Approach to risk identification in undifferentiated mental disorders

    PubMed Central

    Silveira, José; Rockman, Patricia; Fulford, Casey; Hunter, Jon

    2016-01-01

    Abstract Objective To provide primary care physicians with a novel approach to risk identification and related clinical decision making in the management of undifferentiated mental disorders. Sources of information We conducted a review of the literature in PubMed, CINAHL, PsycINFO, and Google Scholar using the search terms diagnostic uncertainty, diagnosis, risk identification, risk assessment/methods, risk, risk factors, risk management/methods, cognitive biases and psychiatry, decision making, mental disorders/diagnosis, clinical competence, evidence-based medicine, interviews as topic, psychiatry/education, psychiatry/methods, documentation/methods, forensic psychiatry/education, forensic psychiatry/methods, mental disorders/classification, mental disorders/psychology, violence/prevention and control, and violence/psychology. Main message Mental disorders are a large component of practice in primary care and often present in an undifferentiated manner, remaining so for prolonged periods. The challenging search for a diagnosis can divert attention from risk identification, as diagnosis is commonly presumed to be necessary before treatment can begin. This might inadvertently contribute to preventable adverse events. Focusing on salient aspects of the patient presentation related to risk should be prioritized. This article presents a novel approach to organizing patient information to assist risk identification and decision making in the management of patients with undifferentiated mental disorders. Conclusion A structured approach can help physicians to manage the clinical uncertainty common to risk identification in patients with mental disorders and cope with the common anxiety and cognitive biases that affect priorities in risk-related decision making. By focusing on risk, functional impairments, and related symptoms using a novel framework, physicians can meet their patients’ immediate needs while continuing the search for diagnostic clarity and long-term treatment. PMID:27965330

  18. Systematic review of methods for evaluating healthcare research economic impact

    PubMed Central

    2010-01-01

    Background The economic benefits of healthcare research require study so that appropriate resources can be allocated to this research, particularly in developing countries. As a first step, we performed a systematic review to identify the methods used to assess the economic impact of healthcare research, and the outcomes. Method An electronic search was conducted in relevant databases using a combination of specific keywords. In addition, 21 relevant Web sites were identified. Results The initial search yielded 8,416 articles. After studying titles, abstracts, and full texts, 18 articles were included in the analysis. Eleven other reports were found on Web sites. We found that the outcomes assessed as healthcare research payback included direct cost savings, cost reductions in healthcare delivery systems, benefits from commercial advancement, and outcomes associated with improved health status. Two methods were used to study healthcare research payback: macro-economic studies, which examine the relationship between research studies and economic outcomes at the aggregated level, and case studies, which examine specific research projects to assess economic impact. Conclusions Our study shows that different methods and outcomes can be used to assess the economic impacts of healthcare research. There is no unique methodological approach for the economic evaluation of such research. In our systematic search we found no research that had evaluated the economic return of research in low- and middle-income countries. We therefore recommend an international consensus on practical guidelines, based on the more comprehensive methodologies (such as the Canadian Academy of Health Sciences and payback frameworks), in order to build capacity, put the necessary information infrastructure in place and promote the skills needed for economic evaluation studies. PMID:20196839

  19. Approach to risk identification in undifferentiated mental disorders.

    PubMed

    Silveira, José; Rockman, Patricia; Fulford, Casey; Hunter, Jon

    2016-12-01

    To provide primary care physicians with a novel approach to risk identification and related clinical decision making in the management of undifferentiated mental disorders. We conducted a review of the literature in PubMed, CINAHL, PsycINFO, and Google Scholar using the search terms diagnostic uncertainty, diagnosis, risk identification, risk assessment/methods, risk, risk factors, risk management/methods, cognitive biases and psychiatry, decision making, mental disorders/diagnosis, clinical competence, evidence-based medicine, interviews as topic, psychiatry/education, psychiatry/methods, documentation/methods, forensic psychiatry/education, forensic psychiatry/methods, mental disorders/classification, mental disorders/psychology, violence/prevention and control, and violence/psychology. Mental disorders are a large component of practice in primary care and often present in an undifferentiated manner, remaining so for prolonged periods. The challenging search for a diagnosis can divert attention from risk identification, as diagnosis is commonly presumed to be necessary before treatment can begin. This might inadvertently contribute to preventable adverse events. Focusing on salient aspects of the patient presentation related to risk should be prioritized. This article presents a novel approach to organizing patient information to assist risk identification and decision making in the management of patients with undifferentiated mental disorders. A structured approach can help physicians to manage the clinical uncertainty common to risk identification in patients with mental disorders and cope with the common anxiety and cognitive biases that affect priorities in risk-related decision making. By focusing on risk, functional impairments, and related symptoms using a novel framework, physicians can meet their patients' immediate needs while continuing the search for diagnostic clarity and long-term treatment. Copyright© the College of Family Physicians of Canada.

  20. Convalescing Cluster Configuration Using a Superlative Framework

    PubMed Central

    Sabitha, R.; Karthik, S.

    2015-01-01

    Competent data mining methods are vital for discovering knowledge from the databases built as a result of the enormous growth of data. Various data mining techniques are applied to obtain knowledge from these databases. Data clustering is one such descriptive data mining technique, which guides the partitioning of data objects into disjoint segments. The K-means algorithm is a versatile algorithm among the various approaches used in data clustering, but the algorithm and its diverse adaptations suffer from certain performance problems. To overcome these issues, a superlative algorithm is proposed in this paper to perform data clustering. The specific features of the proposed algorithm are discretizing the dataset, thereby improving the accuracy of clustering, and adopting a binary search initialization method to generate the cluster centroids. The generated centroids are fed as input to the K-means approach, which iteratively segments the data objects into their respective clusters. The clustered results are measured for accuracy and validity. Experiments conducted by testing the approach on datasets from the UC Irvine Machine Learning Repository evidently show that the accuracy and validity measures are higher than those of the other two approaches, namely, simple K-means and the Binary Search method. Thus, the proposed approach proves that a discretization process improves the efficacy of descriptive data mining tasks. PMID:26543895
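
    The paper's exact binary search initialization is not spelled out in the abstract; the sketch below only captures the spirit of deterministic, data-driven seeding, here taking the median points of k slices of the norm-sorted data as a stand-in for the binary-search rule, followed by standard K-means iterations. The seeding rule and names are assumptions.

```python
import numpy as np

def seeded_kmeans(X, k, iters=50):
    """K-means with deterministic, sorted-order seeding: sort points by
    norm, split into k equal slices, and seed each centroid with the
    median point of its slice (a stand-in for binary-search seeding)."""
    order = np.argsort(np.linalg.norm(X, axis=1))
    slices = np.array_split(order, k)
    centroids = np.array([X[s[len(s) // 2]] for s in slices])
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
               for loc in ((0, 0), (3, 3), (0, 4))])
labels, centroids = seeded_kmeans(X, 3)
print(np.round(centroids, 2))  # one centroid near each cluster center
```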

  1. Evaluation of Conceptual Frameworks Applicable to the Study of Isolation Precautions Effectiveness

    PubMed Central

    Crawford, Catherine; Shang, Jingjing

    2015-01-01

    Aims A discussion of conceptual frameworks applicable to the study of isolation precautions effectiveness, according to Fawcett and DeSanto-Madeya’s (2013) evaluation technique, and of their relative merits and drawbacks for this purpose. Background Isolation precautions are recommended to control infectious diseases with high morbidity and mortality, but their effectiveness is not established due to numerous methodological challenges. These challenges, such as identifying empirical indicators and refining operational definitions, could be alleviated through use of an appropriate conceptual framework. Design Discussion paper. Data Sources In mid-April 2014, the primary author searched five electronic scientific literature databases for conceptual frameworks applicable to the study of isolation precautions, without limiting searches by publication date. Implications for Nursing By reviewing promising conceptual frameworks to support isolation precautions effectiveness research, this paper exemplifies the process of choosing an appropriate conceptual framework for empirical research. Hence, researchers may build on these analyses to improve the study design of empirical research in multiple disciplines, which may lead to improved research and practice. Conclusion Three frameworks were reviewed: the epidemiologic triad of disease, Donabedian’s healthcare quality framework and the Quality Health Outcomes model. Each has been used in nursing research to evaluate health outcomes and contains concepts relevant to nursing domains. Which framework is most useful likely depends on whether the study question necessitates testing multiple interventions, concerns pathogen-specific characteristics and yields cross-sectional or longitudinal data. The Quality Health Outcomes model may be slightly preferred as it assumes reciprocal relationships, supports multi-level analysis and is sensitive to cultural inputs. PMID:26179813

  2. A new pressure ulcer conceptual framework.

    PubMed

    Coleman, Susanne; Nixon, Jane; Keen, Justin; Wilson, Lyn; McGinnis, Elizabeth; Dealey, Carol; Stubbs, Nikki; Farrin, Amanda; Dowding, Dawn; Schols, Jos M G A; Cuddigan, Janet; Berlowitz, Dan; Jude, Edward; Vowden, Peter; Schoonhoven, Lisette; Bader, Dan L; Gefen, Amit; Oomens, Cees W J; Nelson, E Andrea

    2014-10-01

    This paper discusses the critical determinants of pressure ulcer development and proposes a new pressure ulcer conceptual framework. Recent work to develop and validate a new evidence-based pressure ulcer risk assessment framework was undertaken. This formed part of a Pressure UlceR Programme Of reSEarch (RP-PG-0407-10056), funded by the National Institute for Health Research. The foundation for the risk assessment component incorporated a systematic review and a consensus study that highlighted the need to propose a new conceptual framework. Discussion Paper. The new conceptual framework links biomechanical, physiological and epidemiological evidence, through use of data from a systematic review (search conducted March 2010), a consensus study (conducted December 2010-2011) and an international expert group meeting (conducted December 2011). A new pressure ulcer conceptual framework incorporating key physiological and biomechanical components and their impact on internal strains, stresses and damage thresholds is proposed. Direct and key indirect causal factors suggested in a theoretical causal pathway are mapped to the physiological and biomechanical components of the framework. The new proposed conceptual framework provides the basis for understanding the critical determinants of pressure ulcer development and has the potential to influence risk assessment guidance and practice. It could also be used to underpin future research to explore the role of individual risk factors conceptually and operationally. By integrating existing knowledge from epidemiological, physiological and biomechanical evidence, a theoretical causal pathway and new conceptual framework are proposed with potential implications for practice and research. © 2014 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.

  3. BEACON: A Summary Framework to Overcome Potential Reimbursement Hurdles.

    PubMed

    Dunlop, William C N; Mullins, C Daniel; Pirk, Olaf; Goeree, Ron; Postma, Maarten J; Enstone, Ashley; Heron, Louise

    2016-10-01

    To provide a framework for addressing payers' criteria during the development of pharmaceuticals. A conceptual framework was presented to an international health economic expert panel for discussion. A structured literature search (from 2010 to May 2015) of the following databases: Medline® and Medline® In-Process (PubMed), Embase (Ovid), EconLit (EBSCOhost) and the National Health Service Economic Evaluation Database (NHS EED), together with a 'grey literature' search, was conducted to identify existing criteria from the payer perspective. The criteria assessed by existing frameworks and guidelines were collated, and the most commonly reported criteria were considered for inclusion in the framework. A mnemonic was conceived as a memory aid to summarise these criteria. Overall, 41 publications were identified as potentially relevant to the objective. Following further screening, 26 were excluded upon full-text review because no framework was presented (n = 13), the publication was redundant (n = 11) or only an abstract was available (n = 2). Frameworks that captured criteria developed for or utilised by the pharmaceutical industry (n = 5) and reimbursement guidance (n = 10) were reviewed. The most commonly identified criteria (unmet need/patient burden, safety, efficacy, quality-of-life outcomes, environment, evidence quality, budget impact and comparator) were incorporated into the summary framework. For ease of communication, the following mnemonic was developed: BEACON (Burden/target population, Environment, Affordability/value, Comparator, Outcomes, Number of studies/quality of evidence). The BEACON framework aims to capture the 'essence' of payer requirements by addressing the most commonly described criteria requested by payers regarding the introduction of a new pharmaceutical.

  4. A new pressure ulcer conceptual framework

    PubMed Central

    Coleman, Susanne; Nixon, Jane; Keen, Justin; Wilson, Lyn; McGinnis, Elizabeth; Dealey, Carol; Stubbs, Nikki; Farrin, Amanda; Dowding, Dawn; Schols, Jos MGA; Cuddigan, Janet; Berlowitz, Dan; Jude, Edward; Vowden, Peter; Schoonhoven, Lisette; Bader, Dan L; Gefen, Amit; Oomens, Cees WJ; Nelson, E Andrea

    2014-01-01

    Aim This paper discusses the critical determinants of pressure ulcer development and proposes a new pressure ulcer conceptual framework. Background Recent work to develop and validate a new evidence-based pressure ulcer risk assessment framework was undertaken. This formed part of a Pressure UlceR Programme Of reSEarch (RP-PG-0407-10056), funded by the National Institute for Health Research. The foundation for the risk assessment component incorporated a systematic review and a consensus study that highlighted the need to propose a new conceptual framework. Design Discussion Paper. Data Sources The new conceptual framework links biomechanical, physiological and epidemiological evidence, through use of data from a systematic review (search conducted March 2010), a consensus study (conducted December 2010–2011) and an international expert group meeting (conducted December 2011). Implications for Nursing A new pressure ulcer conceptual framework incorporating key physiological and biomechanical components and their impact on internal strains, stresses and damage thresholds is proposed. Direct and key indirect causal factors suggested in a theoretical causal pathway are mapped to the physiological and biomechanical components of the framework. The new proposed conceptual framework provides the basis for understanding the critical determinants of pressure ulcer development and has the potential to influence risk assessment guidance and practice. It could also be used to underpin future research to explore the role of individual risk factors conceptually and operationally. Conclusion By integrating existing knowledge from epidemiological, physiological and biomechanical evidence, a theoretical causal pathway and new conceptual framework are proposed with potential implications for practice and research. PMID:24684197

  5. Exploration and implementation of ontology-based cultural relic knowledge map integration platform

    NASA Astrophysics Data System (ADS)

    Yang, Weiqiang; Dong, Yiqiang

    2018-05-01

    To help designers better carry out creative design and to improve the searchability of traditional cultural relic information, an ontology-based knowledge map construction method was explored and an integrated platform for the cultural relic knowledge map was developed. First, a construction method for the cultural relic ontology was put forward, and the knowledge map of cultural relics was built on the constructed ontology. Then, a personalized semantic retrieval framework for creative design was proposed. Finally, the integrated platform for the cultural relic knowledge map was designed and realized. The platform is divided into two parts: a foreground display system, used by designers to search and browse cultural relics, and a background management system, used by cultural experts to manage cultural relic knowledge. The results showed that the platform improves the retrieval of cultural relic information. In sum, the platform can provide good support for designers' creative design.

  6. Stochastic optimization of broadband reflecting photonic structures.

    PubMed

    Estrada-Wiese, D; Del Río-Chanona, E A; Del Río, J A

    2018-01-19

    Photonic crystals (PCs) are built to control the propagation of light within their structure. They can be used for an assortment of applications where custom-designed devices are of interest. Among them, one-dimensional PCs can be produced to achieve reflection over specific and broad wavelength ranges. However, their design and fabrication are challenging due to the diversity of periodic arrangements and layer configurations that different PCs require. In this study, we present a framework for designing highly reflecting PCs for any desired wavelength range. Our method combines three stochastic optimization algorithms (Random Search, Particle Swarm Optimization and Simulated Annealing) with a reduced search-space methodology to obtain a custom, optimized PC configuration. The optimization procedure is evaluated through theoretical reflectance spectra calculated using the Equispaced Thickness Method, which improves the simulations by accounting for incoherent light transmission. We demonstrate the viability of our procedure by fabricating different reflecting PCs made of porous silicon and obtain good agreement between experiment and theory, quantified using a merit function. With this methodology, diverse reflecting PCs can be designed for a variety of applications and fabricated from different materials.
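    A minimal sketch of the kind of stochastic search described, here simulated annealing over the layer thicknesses of a two-material stack, scored by the standard characteristic-matrix reflectance model at normal incidence. The refractive indices, thickness bounds, target band and annealing schedule below are illustrative assumptions, not the paper's Equispaced Thickness Method or its parameters.

    ```python
    import math
    import random

    def reflectance(ds, ns, lam, n0=1.0, nsub=3.5):
        """Normal-incidence reflectance of a lossless 1-D multilayer
        (standard characteristic-matrix method)."""
        m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
        for d, n in zip(ds, ns):
            delta = 2 * math.pi * n * d / lam
            c, s = math.cos(delta), math.sin(delta)
            a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
            m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                                  m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
        num = n0 * (m11 + m12 * nsub) - (m21 + m22 * nsub)
        den = n0 * (m11 + m12 * nsub) + (m21 + m22 * nsub)
        return abs(num / den) ** 2

    def band_mean(ds, ns, band):
        return sum(reflectance(ds, ns, lam) for lam in band) / len(band)

    def anneal(n_layers, band, steps=20000, t0=0.05, cooling=0.9997):
        # Alternating high/low index layers, e.g. two porosities of porous silicon.
        ns = [2.0 if i % 2 == 0 else 1.4 for i in range(n_layers)]
        x = [random.uniform(50e-9, 300e-9) for _ in range(n_layers)]
        r = band_mean(x, ns, band)
        best, best_r, t = list(x), r, t0
        for _ in range(steps):
            cand = list(x)
            i = random.randrange(n_layers)
            cand[i] = min(300e-9, max(50e-9, cand[i] + random.gauss(0.0, 10e-9)))
            cr = band_mean(cand, ns, band)
            # Maximize reflectance: always accept uphill moves, accept
            # downhill moves with Boltzmann probability.
            if cr > r or random.random() < math.exp((cr - r) / t):
                x, r = cand, cr
                if r > best_r:
                    best, best_r = list(x), r
            t *= cooling
        return best, best_r

    band = [lam * 1e-9 for lam in range(600, 801, 10)]   # target 600-800 nm
    thicknesses, mean_R = anneal(n_layers=20, band=band)
    print(f"mean reflectance over band: {mean_R:.3f}")
    ```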

  7. Bayesian inference for the genetic control of water deficit tolerance in spring wheat by stochastic search variable selection.

    PubMed

    Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi

    2018-06-02

    Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy for developing tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance under water-deficit and normal conditions was treated as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitting models, using different generations obtained from two crosses grown under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities, and the model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive) and epistasis. The results demonstrate that breeding methods such as recurrent selection followed by the pedigree method, as well as hybrid production, can be useful for improving grain yield.
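    A toy illustration of the SSVS mechanics (George and McCulloch-style spike-and-slab Gibbs sampling for a plain linear model), not the authors' generation-means model; the priors, fixed noise variance and simulated data are simplifying assumptions.

    ```python
    import numpy as np

    def ssvs(X, y, n_iter=3000, burn=1000, tau=0.05, c=10.0, sigma2=1.0, p=0.5):
        """Spike-and-slab Gibbs sampler returning posterior inclusion
        probabilities for each predictor (noise variance fixed for brevity)."""
        n, k = X.shape
        rng = np.random.default_rng(0)
        gamma = np.ones(k, dtype=int)
        XtX, Xty = X.T @ X, X.T @ y
        incl = np.zeros(k)
        for it in range(n_iter):
            # beta | gamma, y : Gaussian with precision XtX/sigma2 + D^-1
            d = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)
            cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / d))
            beta = rng.multivariate_normal(cov @ Xty / sigma2, cov)
            # gamma_j | beta_j : Bernoulli from the slab/spike density ratio
            slab = np.exp(-beta**2 / (2 * (c * tau) ** 2)) / (c * tau)
            spike = np.exp(-beta**2 / (2 * tau**2)) / tau
            prob = p * slab / (p * slab + (1 - p) * spike)
            gamma = (rng.random(k) < prob).astype(int)
            if it >= burn:
                incl += gamma
        return incl / (n_iter - burn)

    # Simulated check: only the first two of six effects are real.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=200)
    print(ssvs(X, y).round(2))   # high inclusion probabilities expected for columns 0 and 1
    ```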

  8. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    PubMed

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and using a variable splitting strategy to find the search direction more efficiently, the method achieves fast and stable source reconstruction, even without a priori information about the permissible source region or multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method, and in vivo mouse experiments further indicate its potential for a practical BLT system.
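    The inverse problem pairs a quadratic fidelity term with an l1 sparsity term, i.e. min_x 0.5*||Ax - b||^2 + lam*||x||_1. As an illustration of that objective (not of the IVTCG algorithm itself), here is a plain iterative shrinkage-thresholding (ISTA) solver on a toy underdetermined system:

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=1000):
        """Proximal-gradient (ISTA) solver for min_x 0.5*||Ax-b||^2 + lam*||x||_1.
        A stand-in illustrating the sparse objective; not IVTCG."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - b) / L        # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return x

    # Underdetermined toy system with a 3-sparse ground truth.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
    x_hat = ista(A, A @ x_true, lam=0.1)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))   # ideally recovers {5, 40, 77}
    ```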

  9. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  10. Simultaneous beam sampling and aperture shape optimization for SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet no optimization algorithm exists for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, providing a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for the two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resulting treatment plans compared with conventional IMRT plans.
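    Of the three optimization ingredients, pattern search is the easiest to sketch in isolation: probe each coordinate direction and shrink the step when no move helps. This generic compass search, with a made-up objective standing in for a treatment-plan cost (not SPORT's actual station-parameter model), illustrates the idea:

    ```python
    def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=100000):
        """Compass-style pattern search: try +/- step along each coordinate,
        move on the first improvement, halve the step when none is found."""
        x, fx = list(x0), f(x0)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):
                for sign in (1.0, -1.0):
                    cand = list(x)
                    cand[i] += sign * step
                    fc = f(cand)
                    if fc < fx:
                        x, fx, improved = cand, fc, True
                        break
                if improved:
                    break
            if not improved:
                step *= 0.5          # no direction helped: refine the mesh
                if step < tol:
                    break
        return x, fx

    # Toy quadratic objective; the minimizer is (1, -2).
    x_opt, f_opt = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                                  [5.0, 5.0])
    print(x_opt, f_opt)
    ```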

  11. A novel adaptive Cuckoo search for optimal query plan generation.

    PubMed

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The day-by-day emergence of new web pages has led to the development of semantic web technology. The World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS), for querying and generating optimal query plans for large RDF graphs, is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach yields significant improvements in query execution time. The efficiency of the algorithm was tested and the results are documented.
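    A generic cuckoo search sketch with Mantegna-style Levy flights, minimizing an arbitrary objective over a box. The paper's adaptive variant and its RDF query-plan cost model are not reproduced here, so the objective, bounds and parameters below are placeholders.

    ```python
    import math
    import numpy as np

    def levy(beta, shape, rng):
        """Mantegna's algorithm for Levy-distributed step lengths."""
        num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma = (num / den) ** (1 / beta)
        u = rng.normal(0.0, sigma, shape)
        v = rng.normal(0.0, 1.0, shape)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, dim, n_nests=25, pa=0.25, iters=500,
                      lo=-5.0, hi=5.0, beta=1.5, seed=0):
        rng = np.random.default_rng(seed)
        nests = rng.uniform(lo, hi, (n_nests, dim))
        fit = np.apply_along_axis(f, 1, nests)
        best = nests[fit.argmin()].copy()
        for _ in range(iters):
            # Levy-flight moves biased toward the current best nest.
            step = 0.01 * levy(beta, (n_nests, dim), rng) * (nests - best)
            cand = np.clip(nests + step, lo, hi)
            cfit = np.apply_along_axis(f, 1, cand)
            gain = cfit < fit
            nests[gain] = cand[gain]
            fit[gain] = cfit[gain]
            # A fraction pa of nest coordinates is abandoned and resampled.
            mask = rng.random((n_nests, dim)) < pa
            nests = np.where(mask, rng.uniform(lo, hi, (n_nests, dim)), nests)
            fit = np.apply_along_axis(f, 1, nests)
            best = nests[fit.argmin()].copy()
        return best, fit.min()

    # Sphere function as a stand-in for a query-plan cost model.
    best, cost = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=8)
    print(best.round(3), cost)
    ```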

  12. [Acoustic information in snoring noises].

    PubMed

    Janott, C; Schuller, B; Heiser, C

    2017-02-01

    More than one third of all people snore regularly. Snoring is a common accompaniment of obstructive sleep apnea (OSA) and is often disruptive for the bed partner. This work gives an overview of the history of and state of research on acoustic analysis of snoring for classification of OSA severity, detection of obstructive events, measurement of annoyance, and identification of the sound excitation location. Based on these objectives, searches were conducted in the literature databases PubMed and IEEE Xplore. Publications dealing with the respective objectives according to title and abstract were selected from the search results. A total of 48 publications concerning the above objectives were considered. The limiting factor of many studies is the small number of subjects upon which the analyses are based. Recent research findings show promising results, such that acoustic analysis may find a place in the framework of sleep diagnostics, thus supplementing the recognized standard methods.

  13. Maternal nutrition and birth outcomes.

    PubMed

    Abu-Saad, Kathleen; Fraser, Drora

    2010-01-01

    In this review, the authors summarize current knowledge on maternal nutritional requirements during pregnancy, with a focus on the nutrients that have been most commonly investigated in association with birth outcomes. Data sourcing and extraction included searches of the primary resources establishing maternal nutrient requirements during pregnancy (e.g., Dietary Reference Intakes), and searches of Medline for "maternal nutrition"/[specific nutrient of interest] and "birth/pregnancy outcomes," focusing mainly on the less extensively reviewed evidence from observational studies of maternal dietary intake and birth outcomes. The authors used a conceptual framework which took both primary and secondary factors (e.g., baseline maternal nutritional status, socioeconomic status of the study populations, timing and methods of assessing maternal nutritional variables) into account when interpreting study findings. The authors conclude that maternal nutrition is a modifiable risk factor of public health importance that can be integrated into efforts to prevent adverse birth outcomes, particularly among economically developing/low-income populations.

  14. Urey onboard Exomars: Searching for life on Mars

    NASA Astrophysics Data System (ADS)

    Bada, J.; Ehrenfreund, P.; Grunthaner, F.; Sephton, M.; Urey Team

    2009-04-01

    Exomars is currently under development as the flagship mission of ESA's exploration program Aurora. A fundamental challenge ahead for the Exomars mission is to search for extinct and extant life. The Urey instrument (Mars Organic and Oxidant Detector) has been selected for the Pasteur payload and is considered a key instrument for achieving the mission's scientific objectives. Urey can detect organic compounds at an unprecedented sensitivity of parts per trillion in the Martian regolith. The instrument will target several key classes of organic molecules, such as amino acids, nucleobases, amines, amino sugars and polycyclic aromatic hydrocarbons (PAHs), using state-of-the-art analytical methods. Chemoresistor oxidant sensors will provide complementary measurements by simultaneously evaluating the survival potential of organic compounds in the environment. The Urey instrument concept has tremendous future applications in Mars and Moon exploration in the framework of life detection and planetary protection.

  15. Systematic data integration of clinical trial recruitment locations for geographic-based query and visualization.

    PubMed

    Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua

    2017-12-01

    Prior studies of clinical trial planning indicate that it is crucial to search for and screen recruitment sites before starting to enroll participants. However, there is currently no systematic method to support clinical investigators in searching for candidate recruitment sites according to the clinical trial factors of interest. In this study, we aim to develop a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large number of clinical trials. The location data of more than 1.4 million recruitment sites from over 183,000 clinical trials were normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, information on over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. This data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of the geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that recruitment intensity varied significantly across different countries and geographic areas. This study contributed a new data processing framework to extract and integrate the location data of heterogeneous recruitment sites from clinical trial documents. The developed system can support effective retrieval and analysis of potential recruitment sites using target clinical trial factors. Copyright © 2017 Elsevier B.V. All rights reserved.
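    A minimal sketch of the normalize-then-geocode-once pattern such a pipeline relies on. The normalization rule, the SQLite schema and the `geocode` callable are illustrative assumptions, not the paper's implementation.

    ```python
    import sqlite3

    def normalize(site):
        """Crude normalization of a free-text recruitment location string."""
        return " ".join(site.lower().replace(",", " ").split())

    def geocode_sites(sites, geocode, path="sites.db"):
        """Resolve each distinct normalized site string once, caching results
        in SQLite. `geocode` is any callable mapping a string to (lat, lon);
        a real system would call a geocoding service here."""
        db = sqlite3.connect(path)
        db.execute("CREATE TABLE IF NOT EXISTS loc "
                   "(site TEXT PRIMARY KEY, lat REAL, lon REAL)")
        for raw in sites:
            key = normalize(raw)
            if db.execute("SELECT 1 FROM loc WHERE site=?", (key,)).fetchone():
                continue  # already resolved; duplicates cost nothing
            lat, lon = geocode(key)
            db.execute("INSERT INTO loc VALUES (?,?,?)", (key, lat, lon))
        db.commit()
        return db

    # e.g. db = geocode_sites(["Boston, MA, USA", "boston  ma usa"], my_geocoder)
    # collapses both spellings to one cached lookup.
    ```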

  16. DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.

    PubMed

    Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien

    2017-09-01

    Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially, these computations become a bottleneck, because the massive number of entities makes space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.

  17. Understanding Unintended Consequences and Health Information Technology:

    PubMed Central

    Randell, R.; Borycki, E. M.

    2016-01-01

    Summary Objective No framework exists to identify and study unintended consequences (UICs) with a focus on organizational and social issues (OSIs). To address this shortcoming, we conducted a literature review to develop a framework for considering UICs and health information technology (HIT) from the perspective of OSIs. Methods A literature review was conducted for the period 2000-2015 using the search terms “unintended consequences” and “health information technology”. 67 papers were screened, of which 18 met inclusion criteria. Data extraction was focused on the types of technologies studied, types of UICs identified, and methods of data collection and analysis used. A thematic analysis was used to identify themes related to UICs. Results We identified two overarching themes. One was the definition and terminology of how people classify and discuss UICs. Second was OSIs and UICs. For the OSI theme, we also identified four sub-themes: process change and evolution, individual-collaborative interchange, context of use, and approaches to model, study, and understand UICs. Conclusions While there is a wide body of research on UICs, there is a lack of overall consensus on how they should be classified and reported, limiting our ability to understand the implications of UICs and how to manage them. More mixed-methods research and better proactive identification of UICs remain priorities. Our findings and framework of OSI considerations for studying UICs and HIT extend existing work on HIT and UICs by focusing on organizational and social issues. PMID:27830231

  18. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Probabilistic classifiers are constructed from single-failure samples using paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold that converts the classifiers' probability outputs into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques for global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, both crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
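    The grid search step can be shown in isolation: sweep a shared probability threshold and keep the one that maximizes a multi-label F1 score on validation data. The micro-averaged F1 and the grid below are illustrative choices, not necessarily those of the paper.

    ```python
    import numpy as np

    def best_threshold(prob, truth, grid=np.linspace(0.05, 0.95, 19)):
        """Grid-search a global probability threshold maximizing micro-F1 for
        multi-label (simultaneous-failure) decisions. prob and truth are
        (n_samples, n_labels) arrays; truth holds 0/1 labels."""
        best_t, best_f1 = None, -1.0
        for t in grid:
            pred = (prob >= t).astype(int)
            tp = np.sum((pred == 1) & (truth == 1))
            fp = np.sum((pred == 1) & (truth == 0))
            fn = np.sum((pred == 0) & (truth == 1))
            denom = 2 * tp + fp + fn
            f1 = 2 * tp / denom if denom else 0.0
            if f1 > best_f1:
                best_t, best_f1 = float(t), f1
        return best_t, best_f1

    # Tiny validation set: two samples, three failure labels each.
    prob = np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]])
    truth = np.array([[1, 0, 1], [0, 1, 0]])
    print(best_threshold(prob, truth))
    ```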

  19. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    Accurate quantification of ventricular function is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle through a local search for a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method in terms of both accuracy and computational load.

  20. PORTR: Pre-Operative and Post-Recurrence Brain Tumor Registration

    PubMed Central

    Niethammer, Marc; Akbari, Hamed; Bilello, Michel; Davatzikos, Christos; Pohl, Kilian M.

    2014-01-01

    We propose a new method for deformable registration of pre-operative and post-recurrence brain MR scans of glioma patients. Performing this type of intra-subject registration is challenging as tumor, resection, recurrence, and edema cause large deformations, missing correspondences, and inconsistent intensity profiles between the scans. To address this challenging task, our method, called PORTR, explicitly accounts for pathological information. It segments tumor, resection cavity, and recurrence based on models specific to each scan. PORTR then uses the resulting maps to exclude pathological regions from the image-based correspondence term while simultaneously measuring the overlap between the aligned tumor and resection cavity. Embedded into a symmetric registration framework, we determine the optimal solution by taking advantage of both discrete and continuous search methods. We apply our method to scans of 24 glioma patients. Both quantitative and qualitative analysis of the results clearly show that our method is superior to other state-of-the-art approaches. PMID:24595340

  1. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    ERIC Educational Resources Information Center

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  2. A Systematic Review of Dropout from Organized Sport among Children and Youth

    ERIC Educational Resources Information Center

    Crane, Jeff; Temple, Viviene

    2015-01-01

    Leisure constraints theory was used as a framework to systematically review factors associated with dropout from organized sport among children and adolescents. Keyword searches for the population, context and construct of interest (i.e. dropout) identified articles from the entire contents of the following databases: Academic Search Complete, ERIC,…

  3. Secret Shoppers: The Stealth Applicant Search for Higher Education

    ERIC Educational Resources Information Center

    Dupaul, Stephanie; Harris, Michael S.

    2012-01-01

    Stealth applicants who do not flow through the traditional admission funnel now represent nearly one-third of the national applicant pool. This study employs a consumer behavior framework to examine the behaviors of stealth applicants at a private university. The findings provide a rich illustration of how stealth applicants search for college.…

  4. Model-independent assessment of current direct searches for spin-dependent dark matter.

    PubMed

    Giuliani, F

    2004-10-15

    I evaluate the current results of spin-dependent weakly interacting massive particle searches within a model-independent framework, showing that the most restrictive limits to date derive from the combination of xenon and sodium iodide experiments. The extension of this analysis to the case of positive-signal experiments is elaborated.

  5. Comparing health system performance assessment and management approaches in the Netherlands and Ontario, Canada

    PubMed Central

    Tawfik-Shukor, Ali R; Klazinga, Niek S; Arah, Onyebuchi A

    2007-01-01

    Background Given the proliferation and growing complexity of performance measurement initiatives in many health systems, the Netherlands and Ontario, Canada expressed interest in cross-national comparisons in an effort to promote knowledge transfer and best practice. To support this cross-national learning, a study was undertaken to compare health system performance approaches in the Netherlands with those in Ontario, Canada. Methods We explored the performance assessment framework and system of each constituency, the embeddedness of performance data in management and policy processes, and the interrelationships between the frameworks. Methods used included analysing governmental strategic planning and policy documents, literature and internet searches, comparative descriptive tables, and schematics. Data collection and analysis took place in Ontario and the Netherlands. A workshop to validate and discuss the findings was conducted in Toronto, adding important insights to the study. Results Both Ontario and the Netherlands conceive health system performance within supportive frameworks. However, they differ in their assessment approaches. Ontario's Scorecard links performance measurement with strategy, aimed at health system integration. The Dutch Health Care Performance Report (Zorgbalans) does not explicitly link performance with strategy, and focuses on the technical quality of healthcare by measuring dimensions of quality, access, and cost against healthcare needs. A backbone 'five diamond' framework maps both frameworks and articulates the interrelations and overlap between their goals, themes, dimensions and indicators. The workshop yielded more contextual insights and further validated the comparative values of each constituency's performance assessment system. Conclusion To compare the health system performance approaches of the Netherlands and Ontario, Canada, several important conceptual and contextual issues must be addressed before even attempting any future content comparisons and benchmarking. Addressing such issues would lend relevant interpretational credibility to international comparative assessments of the two health systems. PMID:17319947

  6. Sensitivity and predictive value of 15 PubMed search strategies to answer clinical questions rated against full systematic reviews.

    PubMed

    Agoritsas, Thomas; Merglen, Arnaud; Courvoisier, Delphine S; Combescure, Christophe; Garin, Nicolas; Perrier, Arnaud; Perneger, Thomas V

    2012-06-12

    Clinicians perform searches in PubMed daily, but retrieving relevant studies is challenging due to the rapid expansion of medical knowledge. Little is known about the performance of search strategies when they are applied to answer specific clinical questions. To compare the performance of 15 PubMed search strategies in retrieving relevant clinical trials on therapeutic interventions, we used Cochrane systematic reviews to identify relevant trials for 30 clinical questions. Search terms were extracted from the abstract using a predefined procedure based on the population, intervention, comparison, outcomes (PICO) framework and combined into queries. We tested 15 search strategies that varied in their query (PIC or PICO), use of PubMed's Clinical Queries therapeutic filters (broad or narrow), search limits, and PubMed links to related articles. We assessed the sensitivity (recall) and positive predictive value (precision) of each strategy on the first 2 PubMed pages (40 articles) and on the complete search output. The performance of the search strategies varied widely according to the clinical question. Unfiltered searches and those using the broad filter of Clinical Queries produced large outputs and retrieved few relevant articles within the first 2 pages, resulting in a median sensitivity of only 10%-25%. In contrast, all searches using the narrow filter performed significantly better, with a median sensitivity of about 50% (all P < .001 compared with unfiltered queries) and positive predictive values of 20%-30% (P < .001 compared with unfiltered queries). This benefit was consistent for most clinical questions. Searches based on related articles retrieved about a third of the relevant studies. The Clinical Queries narrow filter, along with well-formulated queries based on the PICO framework, provided the greatest aid in retrieving relevant clinical trials within the first 2 PubMed pages. These results can help clinicians apply effective strategies to answer their questions at the point of care.
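    The two reported metrics are straightforward to compute given a strategy's ranked output and the Cochrane-derived relevant set. A sketch follows; the 40-article first-two-pages window mirrors the paper's setup, while the function itself and its PMID inputs are illustrative.

    ```python
    def retrieval_metrics(retrieved, relevant, page_size=40):
        """Sensitivity (recall) and positive predictive value (precision) of a
        search output, overall and within the first `page_size` hits."""
        retrieved = list(retrieved)          # ranked PMIDs returned by the strategy
        rel = set(relevant)                  # PMIDs deemed relevant by the gold standard
        hits = [p for p in retrieved if p in rel]
        first = retrieved[:page_size]
        first_hits = [p for p in first if p in rel]
        return {
            "sensitivity": len(hits) / len(rel) if rel else 0.0,
            "ppv": len(hits) / len(retrieved) if retrieved else 0.0,
            "sensitivity_first_pages": len(first_hits) / len(rel) if rel else 0.0,
            "ppv_first_pages": len(first_hits) / len(first) if first else 0.0,
        }

    # Toy example with made-up PMIDs.
    print(retrieval_metrics(retrieved=[101, 102, 103, 104],
                            relevant=[102, 104, 999],
                            page_size=2))
    ```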

  7. Scientific Evaluation and Review of Claims in Health Care (SEaRCH): A Streamlined, Systematic, Phased Approach for Determining "What Works" in Healthcare.

    PubMed

    Jonas, Wayne B; Crawford, Cindy; Hilton, Lara; Elfenbaum, Pamela

    2017-01-01

    Answering the question of "what works" in healthcare can be complex and requires the careful design and sequential application of systematic methodologies. Over the last decade, the Samueli Institute has, along with multiple partners, developed a streamlined, systematic, phased approach to this process called the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™). The SEaRCH process provides an approach for rigorously, efficiently, and transparently making evidence-based decisions about healthcare claims in research and practice with minimal bias. SEaRCH uses three methods, combined in a coordinated fashion, to help determine what works in healthcare. The first, the Claims Assessment Profile (CAP), seeks to clarify the healthcare claim and question, and its ability to be evaluated in the context of its delivery. The second method, the Rapid Evidence Assessment of the Literature (REAL©), is a streamlined, systematic review process conducted to determine the quantity, quality, and strength of evidence and risk/benefit for the treatment. The third method involves the structured use of expert panels (EPs); there are several types of EPs, depending on the purpose and need. Together, these three methods (CAP, REAL, and EP) can be integrated into a strategic approach to help answer, in a comprehensive way, the question of "what works" in healthcare and what it means. SEaRCH is a systematic, rigorous approach for evaluating healthcare claims of therapies, practices, programs, or products in an efficient and stepwise fashion. It provides an iterative, protocol-driven process that is customized to the intervention, consumer, and context. Multiple communities, including those involved in health service and policy, can benefit from this organized framework, assuring that evidence-based principles determine which healthcare practices with the greatest promise are used for improving the public's health and wellness.

  8. Standard biological parts knowledgebase.

    PubMed

    Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M; Gennari, John H

    2011-02-24

    We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web-based data access and enables searches for parts that were not previously possible.
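    A sketch of the kind of SPARQL query described, using rdflib against a local dump. The namespace URI and predicate names (sbol:Promoter, sbol:represses, sbol:activates) are hypothetical stand-ins; the actual SBOL-semantic vocabulary may differ.

    ```python
    from rdflib import Graph

    # Hypothetical vocabulary; the real SBOL-semantic terms may differ.
    QUERY = """
    PREFIX sbol: <http://example.org/sbol-semantic#>
    SELECT DISTINCT ?part WHERE {
        ?part a sbol:Promoter .
        ?r1   sbol:represses ?part .
        ?r2   sbol:activates ?part .
    }
    """

    g = Graph()
    g.parse("sbpkb_dump.rdf")      # a local RDF export of the knowledgebase
    for row in g.query(QUERY):
        print(row.part)            # promoters that are both repressed and activated
    ```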

  9. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    PubMed

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG)-based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from the multi-channel EEG signals; the multi-view JCR and JSR features are then integrated and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively, while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Computational search for rare-earth free hard-magnetic materials

    NASA Astrophysics Data System (ADS)

    Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team

    2015-03-01

    It is difficult to overstate the importance of hard magnets for modern life; they enter every walk of it, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household use to computers). All the known hard magnets in use today contain rare-earth elements, the extraction of which is expensive and environmentally harmful. Rare earths are also instrumental in tipping the balance of the world economy, as most of them are mined in a few specific parts of the world. Hence it would be ideal to have materials with characteristics similar to a hard magnet but without, or at least with a reduced amount of, rare earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities which define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms); these are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the "ExMaMa" Project (329386).

  11. Conformational flexibility of two RNA trimers explored by computational tools and database search.

    PubMed

    Fadrná, Eva; Koca, Jaroslav

    2003-04-01

    Two RNA sequences, AAA and AUG, were studied with the conformational search program CICADA and by molecular dynamics (MD) within the AMBER force field, as well as via a thorough PDB database search. CICADA was used to provide detailed information about conformers and conformational interconversions on the energy surfaces of the above molecules. Several conformational families were found for both sequences. Analysis of the results shows differences, especially in the energies of the individual families, as well as in flexibility and concerted conformational movement. Therefore, several MD trajectories (altogether 16 ns) were run to obtain more details about both the stability of conformers belonging to different conformational families and the dynamics of the two systems. Results show that the trajectories strongly depend on the starting structure. When the MD runs start from the global minimum found by CICADA, they provide a stable trajectory, while MD starting from another conformational family generates a trajectory in which several different conformational families are visited. The results obtained by theoretical methods are compared with the thorough database search data. It is concluded that all but the highest-energy conformational families found in the theoretical results also appear in the experimental data. Registry numbers: adenylyl-(3' --> 5')-adenylyl-(3' --> 5')-adenosine [917-44-2]; adenylyl-(3' --> 5')-uridylyl-(3' --> 5')-guanosine [3494-35-7].

  12. Evolutionary tabu search strategies for the simultaneous registration of multiple atomic structures in cryo-EM reconstructions.

    PubMed

    Rusu, Mirabela; Birmanns, Stefan

    2010-04-01

    A structural characterization of multi-component cellular assemblies is essential to explain the mechanisms governing biological function. Macromolecular architectures may be revealed by integrating information collected from various biophysical sources, for instance by interpreting low-resolution electron cryomicroscopy reconstructions in relation to the crystal structures of the constituent fragments. A simultaneous registration of multiple components is beneficial when building atomic models, as it introduces additional spatial constraints that facilitate native placement inside the map. The high-dimensional nature of such a search problem prevents the exhaustive exploration of all possible solutions. Here we introduce a novel method based on genetic algorithms for the efficient exploration of the multi-body registration search space. The classic scheme of a genetic algorithm was enhanced with new genetic operations, tabu search and parallel computing strategies, and validated on a benchmark of synthetic and experimental cryo-EM datasets. Even at a low level of detail, for example 35-40 Å, the technique successfully registered multiple component biomolecules, measuring accuracies within one order of magnitude of the nominal resolutions of the maps. The algorithm was implemented using the Sculptor molecular modeling framework, which also provides a user-friendly graphical interface and enables an instantaneous, visual exploration of intermediate solutions. (c) 2009 Elsevier Inc. All rights reserved.

  13. [Clinical practice guidelines (II): searching and critical evaluation].

    PubMed

    Alonso, P; Bonfill, X

    2007-01-01

    Clinical practice guidelines have characteristics unique to the Internet era, in which they are becoming increasingly popular. The fact that they are often produced by governmental agencies and are not published in conventional journals means that they may not be accessible using the usual search methods employed for other types of scientific studies and documents (clinical trials, reviews, etc.). The Internet has become an essential tool for locating clinical practice guidelines, and meta-search engines, specific databases, directories, and the institutions that produce guidelines are of special importance. The relative lack of indexing of clinical practice guidelines means that Medline and Embase are not as useful in this context as they are for locating original studies. With the aim of evaluating the validity, reproducibility, and reliability of clinical practice guidelines, a series of European institutions designed a tool for their evaluation at the end of the 1990s. This instrument, named AGREE, aims to offer a framework for evaluating the quality of clinical practice guidelines. It can also be useful in the design of new clinical practice guidelines and in assessing the validity of guidelines to be updated or adapted. The AGREE instrument has become the reference for those who use guidelines, those who produce them, and healthcare providers.

  14. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    NASA Technical Reports Server (NTRS)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
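    A compressed illustration of the idea using PyWavelets and SQLite in place of the paper's MySQL schema; the table layout, synthetic trend and threshold query are assumptions, not the paper's data model.

    ```python
    import sqlite3
    import numpy as np
    import pywt  # PyWavelets

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE wcoef (signal TEXT, idx INTEGER, value REAL)")

    def index_trend(signal_id, samples, wavelet="db4", level=4):
        """Store the coarse approximation coefficients of a parameter trend:
        a compact multi-scale descriptor that queries can hit without
        scanning the raw samples."""
        approx = pywt.wavedec(samples, wavelet, level=level)[0]
        db.executemany("INSERT INTO wcoef VALUES (?,?,?)",
                       [(signal_id, i, float(v)) for i, v in enumerate(approx)])

    # Index a synthetic mean-arterial-pressure trend with a hypotensive dip.
    t = np.arange(4096, dtype=float)
    map_trend = 90 - 25 * np.exp(-((t - 2000) / 200.0) ** 2)
    index_trend("pt001-MAP", map_trend)

    # Event query: signals whose coarse trend ever drops below 70 mmHg.
    # Approximation coefficients scale by roughly 2**(level/2) (= 4 here)
    # for orthonormal wavelets, so the threshold is scaled accordingly.
    rows = db.execute("SELECT DISTINCT signal FROM wcoef WHERE value < ?",
                      (70 * 2 ** 2,)).fetchall()
    print(rows)
    ```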

  15. Protein Data Bank Japan (PDBj): updated user interfaces, resource description framework, analysis tools for large structures

    PubMed Central

    Kinjo, Akira R.; Bekker, Gert-Jan; Suzuki, Hirofumi; Tsuchiya, Yuko; Kawabata, Takeshi; Ikegawa, Yasuyo; Nakamura, Haruki

    2017-01-01

    The Protein Data Bank Japan (PDBj, http://pdbj.org), a member of the worldwide Protein Data Bank (wwPDB), accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins. We herein outline the updated web user interfaces together with RESTful web services and the backend relational database that support the former. To enhance the interoperability of the PDB data, we have previously developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, which is now a wwPDB standard called wwPDB/RDF. We have enhanced the connectivity of the wwPDB/RDF data by incorporating various external data resources. Services for searching, comparing and analyzing the ever-increasing large structures determined by hybrid methods are also described. PMID:27789697

  16. NanoParticle Ontology for Cancer Nanotechnology Research

    PubMed Central

    Thomas, Dennis G.; Pappu, Rohit V.; Baker, Nathan A.

    2010-01-01

    Data generated from cancer nanotechnology research are so diverse and large in volume that it is difficult to share and efficiently use them without informatics tools. In particular, ontologies that provide a unifying knowledge framework for annotating the data are required to facilitate the semantic integration, knowledge-based searching, unambiguous interpretation, mining and inferencing of the data using informatics methods. In this paper, we discuss the design and development of NanoParticle Ontology (NPO), which is developed within the framework of the Basic Formal Ontology (BFO), and implemented in the Ontology Web Language (OWL) using well-defined ontology design principles. The NPO was developed to represent knowledge underlying the preparation, chemical composition, and characterization of nanomaterials involved in cancer research. Public releases of the NPO are available through BioPortal website, maintained by the National Center for Biomedical Ontology. Mechanisms for editorial and governance processes are being developed for the maintenance, review, and growth of the NPO. PMID:20211274

  17. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review.

    PubMed

    Hislop, Jenni; Adewuyi, Temitope E; Vale, Luke D; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G; Briggs, Andrew H; Fayers, Peter; Ramsay, Craig R; Norrie, John D; Harvey, Ian M; Buckley, Brian; Cook, Jonathan A

    2014-05-01

    Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966, or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.
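    Once a target difference is specified, it feeds directly into the familiar two-group formula n per group = 2*sigma^2*(z_{1-alpha/2} + z_{1-beta})^2 / delta^2. A quick sketch under the usual normal approximation follows; this is a standard calculation, not one of the seven elicitation methods the review catalogues.

    ```python
    import math
    from statistics import NormalDist

    def n_per_group(delta, sd, alpha=0.05, power=0.80):
        """Per-group sample size for a two-arm parallel trial comparing means,
        given the target difference delta and a common SD (normal approximation)."""
        z = NormalDist().inv_cdf
        return math.ceil(2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2)

    # e.g. detecting a half-SD target difference with 80% power at alpha = 0.05
    print(n_per_group(delta=0.5, sd=1.0))   # about 63 per group
    ```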

  18. Searching Across the International Space Station Databases

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana

    2007-01-01

    Data access in the enterprise generally requires combining data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and complicated than necessary. A context search over multiple domains is proposed in this paper, using context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in the composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach leads to cost-effective and scalable data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand system framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML, used widely at the National Aeronautics and Space Administration (NASA) and in industry.

  19. Using a Positive Psychology and Family Framework to Understand Mexican American Adolescents' College-Going Beliefs

    ERIC Educational Resources Information Center

    Vela, Javier C.; Lenz, A. Stephen; Sparrow, Gregory Scott; Gonzalez, Stacey Lee

    2017-01-01

    Positive psychology is a useful framework to understand Mexican American adolescents' academic experiences. We used a quantitative, predictive design to explore how presence of meaning in life, search for meaning in life, subjective happiness, hope, and family importance influenced 131 Mexican American adolescents' college-going beliefs. We used…

  20. Towards Developing a Theoretical Framework for Measuring Public Sector Managers' Career Success

    ERIC Educational Resources Information Center

    Rasdi, Roziah Mohd; Ismail, Maimunah; Uli, Jegak; Noah, Sidek Mohd

    2009-01-01

    Purpose: The purpose of this paper is to develop a theoretical framework for measuring public sector managers' career success. Design/methodology/approach: The theoretical foundation used in this study is social cognitive career theory. To conduct a literature search, several keywords were identified, i.e. career success, objective and subjective…

  1. Physical Activity Research in Intellectual Disability: A Scoping Review Using the Behavioral Epidemiological Framework

    ERIC Educational Resources Information Center

    Pitchford, E. Andrew; Dixon-Ibarra, Alicia; Hauck, Janet L.

    2018-01-01

    Through a scoping review, the current state of physical activity research in people with intellectual disability was examined. A search of publications between 2000 and 2014 retrieved 362 articles that met inclusion criteria. Eligible studies were coded according to the Behavioral Epidemiological Framework. Of the articles identified, 48% examined…

  2. Identifying functional communities for use in forest planning and decisionmaking

    Treesearch

    Pamela J. Jakes; Thomas E. Fish

    1998-01-01

    Public land managers are searching for frameworks for organizing and displaying social information that will make it useful in forest management and decisionmaking. On the Chequamegon-Nicolet National Forests in Wisconsin, managers and researchers have used the concept of functional communities to develop such a framework. Functional communities define geographic areas...

  3. Growth Infusion: Embedding Staff Development in a Culture of Learning

    ERIC Educational Resources Information Center

    Bennett, Betty J.

    2017-01-01

    To address increasing accountability demands, instructional leaders must find ways to expand the current reality of faculty development to create a culture where continuous growth and learning are the standard for professional behavior. Growth Infusion is a framework for creating such a culture. The framework is a result of an extensive search for…

  4. A narrative review of research impact assessment models and methods.

    PubMed

    Milat, Andrew J; Bauman, Adrian E; Redman, Sally

    2015-03-18

    Research funding agencies continue to grapple with assessing research impact. Theoretical frameworks are useful tools for describing and understanding research impact. The purpose of this narrative literature review was to synthesize evidence that describes processes and conceptual models for assessing policy and practice impacts of public health research. The review involved keyword searches of electronic databases, including MEDLINE, CINAHL, PsycINFO, EBM Reviews, and Google Scholar in July/August 2013. Review search terms included 'research impact', 'policy and practice', 'intervention research', 'translational research', 'health promotion', and 'public health'. The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks and systematic reviews describing processes, and conceptual models for assessing research impact. The review was conducted in two phases: initially, abstracts were retrieved and assessed against the review criteria, followed by the retrieval and assessment of full papers against review criteria. Thirty-one primary studies and one systematic review met the review criteria, with 88% of studies published since 2006. Studies comprised assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, health service research, as well as public health research. Six studies had an explicit focus on assessing impacts of health promotion or public health research and one had a specific focus on intervention research impact assessment. A total of 16 different impact assessment models were identified, with the 'payback model' the most frequently used conceptual framework. Typically, impacts were assessed across multiple dimensions using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The vast majority of studies relied on principal investigator interviews and/or peer review to assess impacts, instead of interviewing policymakers and end-users of research. Research impact assessment is a new field of scientific endeavour, and there are a growing number of conceptual frameworks applied to assess the impacts of research.

  5. Children's Environmental Health Indicators for Low- and Middle-Income Countries in Asia.

    PubMed

    Jung, Eun Mi; Kim, Eun Mee; Kang, Minah; Goldizen, Fiona; Gore, Fiona; Drisse, Marie Noel Brune; Ha, Eun Hee

    Given that low- and middle-income countries (LMICs) in Asia still have high child mortality rates, improved monitoring using children's environmental health indicators (CEHI) may help reduce preventable deaths by creating healthy environments. Thus, the aim of this study is to build a set of targeted CEHI that can be applied in LMICs in Asia through the CEHI initiative using a common conceptual framework. A systematic review was conducted to identify the most frequently used framework for developing CEHI. Due to the limited number of eligible records, a hand search of the reference lists and an extended search of Google Scholar were also performed. Based on our findings, we designed a set of targeted CEHI to address the children's environmental health situation in LMICs in Asia. The Delphi method was then adopted to assess the relevance, appropriateness, and feasibility of the targeted CEHI. The systematic review indicated that the Driving Force-Pressure-State-Exposure-Effect-Action framework and the Multiple-Exposures-Multiple-Effects model were the most common conceptual frameworks for developing CEHI. The Multiple-Exposures-Multiple-Effects model was adopted, given that its population of interest is children and that it emphasizes the many-to-many relationship. Our review also showed that most of the previous studies covered upper-middle- or high-income countries. The Delphi results validated the targeted CEHI. The targeted CEHI were further specified by age group, gender, and place of residence (urban/rural) to enhance measurability. Improved monitoring systems of children's environmental health using the targeted CEHI may mitigate the data gap and enhance the quality of data in LMICs in Asia. Furthermore, critical information on the complex interaction between the environment and children's health using the CEHI will help establish a regional environmental children's health action plan, named "The Children's Environment and Health Action Plan for Asia."

  6. Rethinking research in the medical humanities: a scoping review and narrative synthesis of quantitative outcome studies.

    PubMed

    Dennhardt, Silke; Apramian, Tavis; Lingard, Lorelei; Torabi, Nazi; Arntfield, Shannon

    2016-03-01

    The rise of medical humanities teaching in medical education has introduced pressure to prove efficacy and utility. Review articles on the available evidence have been criticised for poor methodology and unwarranted conclusions. To support a more nuanced discussion of how the medical humanities work, we conducted a scoping review of quantitative studies of medical humanities teaching. Using a search strategy involving MEDLINE, EMBASE and ERIC, and hand searching, our scoping review located 11,045 articles that referred to the use of medical humanities teaching in medical education. Of these, 62 studies using quantitative evaluation methods were selected for review. Three iterations of analysis were performed: descriptive, conceptual, and discursive. Descriptive analysis revealed that the medical humanities as a whole cannot be easily systematised based on simple descriptive categories. Conceptual analysis supported the development of a conceptual framework in which the foci of the arts and humanities in medical education can be mapped alongside their related epistemic functions for teaching and learning. Within the framework, art functioned as expertise, as dialogue or as a means of expression and transformation. In the discursive analysis, we found three main ways in which the relationship between the arts and humanities and medicine was constructed as, respectively, intrinsic, additive and curative. This review offers a nuanced framework of how different types of medical humanities work. The epistemological assumptions and discursive positioning of medical humanities teaching frame the forms of outcomes research that are considered relevant to curriculum decision making, and shed light on why dominant review methodologies make some functions of medical humanities teaching visible and render others invisible. We recommend the use of this framework to improve the rigor and relevance of future explorations of the efficacy and utility of medical humanities teaching.

  7. Person-centred rehabilitation: what exactly does it mean? Protocol for a scoping review with thematic analysis towards framing the concept and practice of person-centred rehabilitation

    PubMed Central

    Bright, Felicity; Kayes, Nicola; Cott, Cheryl A

    2016-01-01

    Introduction Person-centredness is a philosophy for organising and delivering healthcare based on patients’ needs, preferences and experiences. Although widely endorsed, the concept suffers from a lack of detail and clarification, in turn accounting for ambiguous implementation and outcomes. While a conceptual framework based on a systematic review defines person/patient-centred care components (Scholl et al, 2014), it applies across healthcare contexts and may not be sensitive to the nuances of the rehabilitation of adults with physical impairments. Accordingly, this study aims to build a conceptual framework, based on existing literature, of what person-centredness means in the rehabilitation of adults with physical impairments in the clinical encounter and broader health service delivery. Methods and analysis We will use a scoping review methodology. Searches on relevant databases will be conducted first, combining keywords for ‘rehabilitation’, ‘person-centered’ and associated terms (including patient preferences/experiences). Next, snowball searches (citation tracking, references lists) will be performed. Papers will be included if they fall within predefined selection categories (seen as most likely informative on elements pertaining to person-centred rehabilitation) and are written in English, regardless of design (conceptual, qualitative, quantitative). Two reviewers will independently screen titles and abstracts, followed by screening of the full text to determine inclusion. Experts will then be consulted to identify relevant missing papers. This can include elements other than the peer-reviewed literature (eg, book chapters, policy/legal papers). Finally, information that helps to build the concept and practice of person-centred rehabilitation will be abstracted independently by two reviewers and analysed by inductive thematic analysis to build the conceptual framework. Dissemination The resulting framework will aid clarification regarding person-centred rehabilitation, which in turn is expected to conceptually ground and inform its operationalisation (eg, measurement, implementation, improvement). Findings will be disseminated through local, national and international stakeholders, both at the clinical and service organisation levels. PMID:27436670

  8. A Case Study of Controlling Crossover in a Selection Hyper-heuristic Framework Using the Multidimensional Knapsack Problem.

    PubMed

    Drake, John H; Özcan, Ender; Burke, Edmund K

    2016-01-01

    Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
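
    As a minimal sketch of this mechanism, assuming a toy knapsack instance and only two low-level heuristics, the code below keeps a short list of good solutions so that a second parent is available whenever the selected heuristic is crossover. It is far simpler than HyFlex or Hyperion and is not the authors' framework.

      # Single-point selection hyper-heuristic for a toy 0-1 knapsack,
      # with a managed list of potential crossover partners.
      import random

      values  = [6, 5, 8, 9, 6, 7, 3]
      weights = [2, 3, 6, 7, 5, 9, 4]
      CAP = 15

      def fitness(sol):
          w = sum(wi for wi, s in zip(weights, sol) if s)
          return sum(vi for vi, s in zip(values, sol) if s) if w <= CAP else 0

      def mutate(sol, _pool):
          i = random.randrange(len(sol))
          return sol[:i] + [1 - sol[i]] + sol[i + 1:]

      def crossover(sol, pool):
          other = max(pool, key=fitness)        # the list supplies the second parent
          cut = random.randrange(1, len(sol))
          return sol[:cut] + other[cut:]

      current = [0] * len(values)
      pool = [current]                          # list of potential crossover partners
      for _ in range(2000):
          heuristic = random.choice([mutate, crossover])   # selection step
          candidate = heuristic(current, pool)
          if fitness(candidate) >= fitness(current):       # accept non-worsening moves
              current = candidate
              pool = sorted(pool + [candidate], key=fitness)[-5:]   # keep the 5 best
      print(fitness(current), current)          # usually finds the optimum (23) here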

  9. Data warehouse governance programs in healthcare settings: a literature review and a call to action.

    PubMed

    Elliott, Thomas E; Holmes, John H; Davidson, Arthur J; La Chance, Pierre-Andre; Nelson, Andrew F; Steiner, John F

    2013-01-01

    Given the extensive data stored in healthcare data warehouses, data warehouse governance policies are needed to ensure data integrity and privacy. This review examines the current state of the data warehouse governance literature as it applies to healthcare data warehouses, identifies knowledge gaps, provides recommendations, and suggests approaches for further research. A comprehensive literature search using five databases, journal article title searches, and citation searches was conducted for the period between 1997 and 2012. Data warehouse governance documents from two healthcare systems in the USA were also reviewed. A modified version of nine components from the Data Governance Institute Framework for data warehouse governance guided the qualitative analysis. Fifteen articles were retrieved. Only three were related to healthcare settings, each of which addressed only one of the nine framework components. Of the remaining 12 articles, 10 addressed between one and seven framework components and the remainder addressed none. Each of the two data warehouse governance plans obtained from healthcare systems in the USA addressed a subset of the framework components, and between them they covered all nine. While published data warehouse governance policies are rare, the 15 articles and two healthcare organizational documents reviewed in this study may provide guidance for creating such policies. Additional research is needed in this area to ensure that data warehouse governance policies are feasible and effective. The gap between the development of data warehouses in healthcare settings and formal governance policies is substantial, as evidenced by the sparse literature in this domain.

  10. People, plants and health: a conceptual framework for assessing changes in medicinal plant consumption.

    PubMed

    Smith-Hall, Carsten; Larsen, Helle Overgaard; Pouliot, Mariève

    2012-11-13

    A large number of people in both developing and developed countries rely on medicinal plant products to maintain their health or treat illnesses. Available evidence suggests that medicinal plant consumption will remain stable or increase in the short to medium term. Knowledge of what factors determine medicinal plant consumption is, however, scattered across many disciplines, impeding, for example, systematic consideration of plant-based traditional medicine in national health care systems. The aim of the paper is to develop a conceptual framework for understanding medicinal plant consumption dynamics. Consumption is employed in the economic sense: use of medicinal plants by consumers or in the production of other goods. PubMed and Web of Knowledge (formerly Web of Science) were searched using a set of medicinal plant key terms (folk/peasant/rural/traditional/ethno/indigenous/CAM/herbal/botanical/phytotherapy); each search term was combined with terms related to medicinal plant consumption dynamics (medicinal plants/health care/preference/trade/treatment seeking behavior/domestication/sustainability/conservation/urban/migration/climate change/policy/production systems). To eliminate studies not directly focused on medicinal plant consumption, searches were limited by a number of terms (chemistry/clinical/in vitro/antibacterial/dose/molecular/trial/efficacy/antimicrobial/alkaloid/bioactive/inhibit/antibody/purification/antioxidant/DNA/rat/aqueous). A total of 1940 references were identified; manual screening for relevance reduced this to 645 relevant documents. As the conceptual framework emerged inductively, additional targeted literature searches were undertaken on specific factors and links, bringing the final number of references to 737. The paper first defines the four main groups of medicinal plant users (1. Hunter-gatherers, 2. Farmers and pastoralists, 3. Urban and peri-urban people, 4. Entrepreneurs) and the three main types of benefits (consumer, producer, society-wide) derived from medicinal plant usage. Then a single unified conceptual framework for understanding the factors influencing medicinal plant consumption in the economic sense is proposed; the framework distinguishes four spatial levels of analysis (international, national, local, household) and identifies and describes 15 factors and their relationships. The framework provides a basis for increasing our conceptual understanding of medicinal plant consumption dynamics, allows a positioning of existing studies, and can serve to guide future research in the area. This would inform the formation of future health and natural resource management policies.
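
    Block-structured strategies of this kind lend themselves to programmatic construction. The sketch below is a reconstruction (with the term lists abbreviated) of how the three blocks could be assembled into a boolean query string; it is not the authors' actual search code.

      # Assemble a block-structured boolean search query (illustrative only;
      # term lists abbreviated from the abstract).
      plant_terms   = ["folk", "peasant", "rural", "traditional", "ethno",
                       "indigenous", "CAM", "herbal", "botanical", "phytotherapy"]
      dynamic_terms = ["medicinal plants", "health care", "preference", "trade",
                       "treatment seeking behavior", "domestication"]
      exclude_terms = ["chemistry", "clinical", "in vitro", "antibacterial"]

      def block(terms):
          return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

      query = f"{block(plant_terms)} AND {block(dynamic_terms)} NOT {block(exclude_terms)}"
      print(query)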

  11. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI algorithm is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
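
    As a sketch of the soft-computing minimization step, the following is a generic fixed-tenure Tabu Search over bit-vector solutions. The actual NN-SI objective built from Hankel matrices is abstracted into an arbitrary `objective` callable, the toy objective at the bottom is purely illustrative, and aspiration criteria and other refinements are omitted.

      # Generic fixed-tenure Tabu Search minimizer over bit vectors.
      import random

      def tabu_search(objective, n_bits, iters=500, tenure=3):
          current = [random.randint(0, 1) for _ in range(n_bits)]
          best, best_val = current, objective(current)
          tabu = []                                  # recently flipped positions
          for _ in range(iters):
              # evaluate all single-bit flips at non-tabu positions
              neighbours = []
              for i in (j for j in range(n_bits) if j not in tabu):
                  cand = current[:]
                  cand[i] = 1 - cand[i]
                  neighbours.append((objective(cand), i, cand))
              val, i, current = min(neighbours)      # best admissible move
              tabu = (tabu + [i])[-tenure:]          # fixed-tenure tabu list
              if val < best_val:
                  best, best_val = current, val
          return best, best_val

      # toy objective: Hamming distance from a hidden target pattern
      target = [1, 0, 1, 1, 0, 0, 1, 0]
      print(tabu_search(lambda s: sum(a != b for a, b in zip(s, target)), 8))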

  12. Strengthening health system to improve immunization for migrants in China.

    PubMed

    Fang, Hai; Yang, Li; Zhang, Huyang; Li, Chenyang; Wen, Liankui; Sun, Li; Hanson, Kara; Meng, Qingyue

    2017-07-01

    Immunization is the most cost-effective method to prevent and control vaccine-preventable diseases. The migrant population in China has been rising rapidly, and its immunization status is poor. China has tried various strategies to strengthen its health system, which have significantly improved immunization for migrants. This study applied a qualitative retrospective review method aiming to collect, analyze, and synthesize health system strengthening experiences and practices for improving immunization for migrants in China. A conceptual framework based on the Theory of Change was used to extract data from the retrieved literature. Eleven retrieved studies and four national laws and policies related to immunization for migrant children were carefully studied. China mainly employed three health system strengthening strategies to significantly improve immunization for the migrant population: stop charging immunization fees or immunization insurance, manage immunization certificates well, and pay extra attention to immunization for special children, including migrant children. These health system strengthening strategies were very effective, and the retrieved studies show that up-to-date and age-appropriate immunization rates were significantly improved for migrant children. Economic development has led to a larger migrant population in China, but immunization among migrants, particularly migrant children, was poor. Fortunately, various health system strengthening strategies were employed to improve immunization for migrants in China, and they were rather successful. The experiences and lessons of immunization for the migrant population in China might be helpful for other developing countries with large migrant populations.

  13. Measuring organizational and individual factors thought to influence the success of quality improvement in primary care: a systematic review of instruments

    PubMed Central

    2012-01-01

    Background Continuous quality improvement (CQI) methods are widely used in healthcare; however, the effectiveness of the methods is variable, and evidence about the extent to which contextual and other factors modify effects is limited. Investigating the relationship between these factors and CQI outcomes poses challenges for those evaluating CQI, among the most complex of which relate to the measurement of modifying factors. We aimed to provide guidance to support the selection of measurement instruments by systematically collating, categorising, and reviewing quantitative self-report instruments. Methods Data sources: We searched MEDLINE, PsycINFO, and Health and Psychosocial Instruments, reference lists of systematic reviews, and citations and references of the main report of instruments. Study selection: The scope of the review was determined by a conceptual framework developed to capture factors relevant to evaluating CQI in primary care (the InQuIRe framework). Papers reporting development or use of an instrument measuring a construct encompassed by the framework were included. Data extracted included instrument purpose; theoretical basis, constructs measured and definitions; development methods and assessment of measurement properties. Analysis and synthesis: We used qualitative analysis of instrument content and our initial framework to develop a taxonomy for summarising and comparing instruments. Instrument content was categorised using the taxonomy, illustrating coverage of the InQuIRe framework. Methods of development and evidence of measurement properties were reviewed for instruments with potential for use in primary care. Results We identified 186 potentially relevant instruments, 152 of which were analysed to develop the taxonomy. Eighty-four instruments measured constructs relevant to primary care, with content measuring CQI implementation and use (19 instruments), organizational context (51 instruments), and individual factors (21 instruments). Forty-one instruments were included for full review. Development methods were often pragmatic, rather than systematic and theory-based, and evidence supporting measurement properties was limited. Conclusions Many instruments are available for evaluating CQI, but most require further use and testing to establish their measurement properties. Further development and use of these measures in evaluations should increase the contribution made by individual studies to our understanding of CQI and enhance our ability to synthesise evidence for informing policy and practice. PMID:23241168

  14. Self-Referent Constructs and Medical Sociology: In Search of an Integrative Framework*

    PubMed Central

    Kaplan, Howard B.

    2010-01-01

    A theoretical framework centering on four classes of self-referent constructs is offered as a device for integrating the diverse areas constituting medical sociology. Guidance by this framework sensitizes the researcher to the occurrence of parallel processes in adjacent disciplines, facilitates recognition of the etiological significance of findings from other disciplines for explaining medical sociological phenomena, and encourages transactions between sociology and medical sociology whereby each informs and is informed by the other. PMID:17583268

  15. Topological Semimetals Studied by Ab Initio Calculations

    NASA Astrophysics Data System (ADS)

    Hirayama, Motoaki; Okugawa, Ryo; Murakami, Shuichi

    2018-04-01

    In topological semimetals such as Weyl, Dirac, and nodal-line semimetals, the band gap closes at points or along lines in k space which are not necessarily located at high-symmetry positions in the Brillouin zone. Therefore, it is not straightforward to find these topological semimetals by ab initio calculations because the band structure is usually calculated only along high-symmetry lines. In this paper, we review recent studies on topological semimetals by ab initio calculations. We explain theoretical frameworks which can be used for the search for topological semimetal materials, and some numerical methods used in the ab initio calculations.
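
    A brute-force complement to such frameworks is simply to scan the Brillouin zone away from the high-symmetry lines. The toy sketch below (an illustration, not code from the review) does this for the simplest two-band Weyl model H(k) = kx·σx + ky·σy + kz·σz, whose gap closes at an isolated point that a high-symmetry-line scan could miss.

      # Scan a k-grid for band degeneracies of a toy two-band Weyl model.
      import numpy as np

      sx = np.array([[0, 1], [1, 0]], dtype=complex)
      sy = np.array([[0, -1j], [1j, 0]])
      sz = np.array([[1, 0], [0, -1]], dtype=complex)

      def gap(kx, ky, kz):
          H = kx * sx + ky * sy + kz * sz
          e = np.linalg.eigvalsh(H)                # eigenvalues in ascending order
          return e[1] - e[0]

      ks = np.linspace(-1.0, 1.0, 21)
      closings = [(kx, ky, kz)
                  for kx in ks for ky in ks for kz in ks
                  if gap(kx, ky, kz) < 1e-8]
      print(closings)                              # only the grid point at the origin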

  16. African Primary Care Research: Reviewing the literature

    PubMed Central

    Mash, Bob

    2014-01-01

    Abstract This is the second article in the series on African primary care research. The article focuses on how to search for relevant evidence in the published literature that can be used in the development of a research proposal. The article addresses the style of writing required and the nature of the arguments for the social and scientific value of the proposed study, as well as the use of literature in conceptual frameworks and in the methods. Finally, the article looks at how to keep track of the literature used and to reference it appropriately. PMID:26245433

  17. The behaviour change wheel: a new method for characterising and designing behaviour change interventions.

    PubMed

    Michie, Susan; van Stralen, Maartje M; West, Robert

    2011-04-23

    Improving the design and implementation of evidence-based practice depends on successful behaviour change interventions. This requires an appropriate method for characterising interventions and linking them to an analysis of the targeted behaviour. There exists a plethora of frameworks of behaviour change interventions, but it is not clear how well they serve this purpose. This paper evaluates these frameworks, and develops and evaluates a new framework aimed at overcoming their limitations. A systematic search of electronic databases and consultation with behaviour change experts were used to identify frameworks of behaviour change interventions. These were evaluated according to three criteria: comprehensiveness, coherence, and a clear link to an overarching model of behaviour. A new framework was developed to meet these criteria. The reliability with which it could be applied was examined in two domains of behaviour change: tobacco control and obesity. Nineteen frameworks were identified covering nine intervention functions and seven policy categories that could enable those interventions. None of the frameworks reviewed covered the full range of intervention functions or policies, and only a minority met the criteria of coherence or linkage to a model of behaviour. At the centre of a proposed new framework is a 'behaviour system' involving three essential conditions: capability, opportunity, and motivation (what we term the 'COM-B system'). This forms the hub of a 'behaviour change wheel' (BCW) around which are positioned the nine intervention functions aimed at addressing deficits in one or more of these conditions; around this are placed seven categories of policy that could enable those interventions to occur. The BCW was used reliably to characterise interventions within the English Department of Health's 2010 tobacco control strategy and the National Institute for Health and Clinical Excellence's guidance on reducing obesity. Interventions and policies to change behaviour can be usefully characterised by means of a BCW comprising: a 'behaviour system' at the hub, encircled by intervention functions and then by policy categories. Research is needed to establish how far the BCW can lead to more efficient design of effective interventions.

  18. Clustered-dot halftoning with direct binary search.

    PubMed

    Goyal, Puneet; Gupta, Madhur; Staelin, Carl; Fischer, Mani; Shacham, Omri; Allebach, Jan P

    2013-02-01

    In this paper, we present a new algorithm for aperiodic clustered-dot halftoning based on direct binary search (DBS). The DBS optimization framework has been modified for designing clustered-dot texture, by using filters with different sizes in the initialization and update steps of the algorithm. Following an intuitive explanation of how the clustered-dot texture results from this modified framework, we derive a closed-form cost metric which, when minimized, equivalently generates stochastic clustered-dot texture. An analysis of the cost metric and its influence on the texture quality is presented, which is followed by a modification to the cost metric to reduce computational cost and to make it more suitable for screen design.
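
    For orientation, a stripped-down, toggle-only DBS loop is sketched below: it flips any pixel whose toggle reduces the error as perceived through a Gaussian model of the human visual system. It omits the filter-size modifications that produce clustered-dot texture as well as the efficient incremental error updates that practical DBS implementations rely on, so it produces ordinary dispersed-dot texture.

      # Toggle-only direct binary search against a Gaussian HVS model
      # (illustrative; recomputes the full filtered error at every trial).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dbs(gray, sigma=1.5, sweeps=5):
          halftone = (np.random.rand(*gray.shape) < gray).astype(float)
          for _ in range(sweeps):
              changed = False
              for i in range(gray.shape[0]):
                  for j in range(gray.shape[1]):
                      err_now = np.sum(gaussian_filter(halftone - gray, sigma) ** 2)
                      halftone[i, j] = 1 - halftone[i, j]        # trial toggle
                      err_new = np.sum(gaussian_filter(halftone - gray, sigma) ** 2)
                      if err_new >= err_now:
                          halftone[i, j] = 1 - halftone[i, j]    # revert
                      else:
                          changed = True
              if not changed:
                  break                                          # converged
          return halftone

      print(dbs(np.full((16, 16), 0.3)).mean())    # mean dot density near 0.3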

  19. Technology-induced errors. The current use of frameworks and models from the biomedical and life sciences literatures.

    PubMed

    Borycki, E M; Kushniruk, A W; Bellwood, P; Brender, J

    2012-01-01

    The objective of this paper is to examine the extent, range and scope to which frameworks, models and theories dealing with technology-induced error have arisen in the biomedical and life sciences literature as indexed by Medline®. To better understand the state of work in the area of technology-induced error involving frameworks, models and theories, the authors conducted a search of Medline® using selected key words identified from seminal articles in this research area. Articles were reviewed and those pertaining to frameworks, models or theories dealing with technology-induced error were further reviewed by two researchers. All articles from Medline® from its inception to April of 2011 were searched using the above outlined strategy. 239 citations were returned. Each of the abstracts for the 239 citations were reviewed by two researchers. Eleven articles met the criteria based on abstract review. These 11 articles were downloaded for further in-depth review. The majority of the articles obtained describe frameworks and models with reference to theories developed in other literatures outside of healthcare. The papers were grouped into several areas. It was found that articles drew mainly from three literatures: 1) the human factors literature (including human-computer interaction and cognition), 2) the organizational behavior/sociotechnical literature, and 3) the software engineering literature. A variety of frameworks and models were found in the biomedical and life sciences literatures. These frameworks and models drew upon and extended frameworks, models and theoretical perspectives that have emerged in other literatures. These frameworks and models are informing an emerging line of research in health and biomedical informatics involving technology-induced errors in healthcare.

  20. A framework for assessing Health Economic Evaluation (HEE) quality appraisal instruments

    PubMed Central

    2012-01-01

    Background Health economic evaluations support the health care decision-making process by providing information on costs and consequences of health interventions. The quality of such studies is assessed by health economic evaluation (HEE) quality appraisal instruments. At present, there is no instrument for measuring and improving the quality of such HEE quality appraisal instruments. Therefore, the objectives of this study are to establish a framework for assessing the quality of HEE quality appraisal instruments to support and improve their quality, and to apply this framework to those HEE quality appraisal instruments which have been subject to more scrutiny than others, in order to test the framework and to demonstrate the shortcomings of existing HEE quality appraisal instruments. Methods To develop the quality assessment framework for HEE quality appraisal instruments, the experiences of using appraisal tools for clinical guidelines are used. Based on a deductive iterative process, clinical guideline appraisal instruments identified through literature search are reviewed, consolidated, and adapted to produce the final quality assessment framework for HEE quality appraisal instruments. Results The final quality assessment framework for HEE quality appraisal instruments consists of 36 items organized within 7 dimensions, each of which captures a specific domain of quality. Applying the quality assessment framework to four existing HEE quality appraisal instruments, it is found that these four quality appraisal instruments are of variable quality. Conclusions The framework described in this study should be regarded as a starting point for appraising the quality of HEE quality appraisal instruments. This framework can be used by HEE quality appraisal instrument producers to support and improve the quality and acceptance of existing and future HEE quality appraisal instruments. By applying this framework, users of HEE quality appraisal instruments can become aware of methodological deficiencies inherent in existing HEE quality appraisal instruments. These shortcomings of existing HEE quality appraisal instruments are illustrated by the pilot test. PMID:22894708

  1. Implementing communication and decision-making interventions directed at goals of care: a theory-led scoping review

    PubMed Central

    Cummings, Amanda; Lund, Susi; Campling, Natasha; May, Carl; Richardson, Alison; Myall, Michelle

    2017-01-01

    Objectives To identify the factors that promote and inhibit the implementation of interventions that improve communication and decision-making directed at goals of care in the event of acute clinical deterioration. Design and methods A scoping review was undertaken based on the methodological framework of Arksey and O’Malley for conducting this type of review. Searches were carried out in Medline and Cumulative Index to Nursing and Allied Health Literature (CINAHL) to identify peer-reviewed papers and in Google to identify grey literature. Searches were limited to those published in the English language from 2000 onwards. Inclusion and exclusion criteria were applied, and only papers that had a specific focus on implementation in practice were selected. Data extracted were treated as qualitative and subjected to directed content analysis. A theory-informed coding framework using Normalisation Process Theory (NPT) was applied to characterise and explain implementation processes. Results Searches identified 2619 citations, 43 of which met the inclusion criteria. Analysis generated six themes fundamental to successful implementation of goals of care interventions: (1) input into development; (2) key clinical proponents; (3) training and education; (4) intervention workability and functionality; (5) setting and context; and (6) perceived value and appraisal. Conclusions A broad and diverse literature focusing on implementation of goals of care interventions was identified. Our review recognised these interventions as both complex and contentious in nature, making their incorporation into routine clinical practice dependent on a number of factors. Implementing such interventions presents challenges at individual, organisational and systems levels, which make them difficult to introduce and embed. We have identified a series of factors that influence successful implementation and our analysis has distilled key learning points, conceptualised as a set of propositions, we consider relevant to implementing other complex and contentious interventions. PMID:28988176

  2. Privacy-Preserving Patient Similarity Learning in a Federated Environment: Development and Analysis

    PubMed Central

    Sun, Jimeng; Wang, Fei; Wang, Shuang; Jun, Chi-Hyuck; Jiang, Xiaoqian

    2018-01-01

    Background There is an urgent need for the development of global analytic frameworks that can perform analyses in a privacy-preserving federated environment across multiple institutions without privacy leakage. A few studies on the topic of federated medical analysis have been conducted recently with the focus on several algorithms. However, none of them have solved similar patient matching, which is useful for applications such as cohort construction for cross-institution observational studies, disease surveillance, and clinical trials recruitment. Objective The aim of this study was to present a privacy-preserving platform in a federated setting for patient similarity learning across institutions. Without sharing patient-level information, our model can find similar patients from one hospital to another. Methods We proposed a federated patient hashing framework and developed a novel algorithm to learn context-specific hash codes to represent patients across institutions. The similarities between patients can be efficiently computed using the resulting hash codes of corresponding patients. To avoid security attack from reverse engineering on the model, we applied homomorphic encryption to patient similarity search in a federated setting. Results We used sequential medical events extracted from the Multiparameter Intelligent Monitoring in Intensive Care-III database to evaluate the proposed algorithm in predicting the incidence of five diseases independently. Our algorithm achieved averaged area under the curves of 0.9154 and 0.8012 with balanced and imbalanced data, respectively, in k-nearest neighbor with k=3. We also confirmed privacy preservation in similarity search by using homomorphic encryption. Conclusions The proposed algorithm can help search similar patients across institutions effectively to support federated data analysis in a privacy-preserving manner. PMID:29653917
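
    The retrieval step implied above is easy to sketch in isolation, assuming patients have already been encoded as binary hash codes; the hashing model itself and the homomorphic encryption layer are omitted here.

      # k-nearest-neighbour search under Hamming distance over binary hash codes.
      import numpy as np

      rng = np.random.default_rng(0)
      codes = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)   # 1000 patients, 64-bit codes

      def knn_hamming(query, codes, k=3):
          """Indices of the k codes closest to `query` in Hamming distance."""
          dists = np.count_nonzero(codes != query, axis=1)
          return np.argsort(dists)[:k]

      print(knn_hamming(codes[42], codes))   # the query is its own zero-distance match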

  3. OERScout Technology Framework: A Novel Approach to Open Educational Resources Search

    ERIC Educational Resources Information Center

    Abeywardena, Ishan Sudeera; Chan, Chee Seng; Tham, Choy Yoong

    2013-01-01

    The open educational resources (OER) movement has gained momentum in the past few years. With this new drive towards making knowledge open and accessible, a large number of OER repositories have been established and made available online throughout the world. However, the inability of existing search engines such as Google, Yahoo!, and Bing to…

  4. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  5. In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data

    ERIC Educational Resources Information Center

    Yi, Yeon-Sook

    2017-01-01

    This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…

  6. Examining a Model of Life Satisfaction among Unemployed Adults

    ERIC Educational Resources Information Center

    Duffy, Ryan D.; Bott, Elizabeth M.; Allan, Blake A.; Torrey, Carrie L.

    2013-01-01

    The present study examined a model of life satisfaction among a diverse sample of 184 adults who had been unemployed for an average of 10.60 months. Using the Lent (2004) model of life satisfaction as a framework, a model was tested with 5 hypothesized predictor variables: optimism, job search self-efficacy, job search support, job search…

  7. Factors associated with the implementation of community-based peer-led health promotion programs: A scoping review.

    PubMed

    Lorthios-Guilledroit, Agathe; Richard, Lucie; Filiatrault, Johanne

    2018-06-01

    Peer education is growing in popularity as a useful health promotion strategy. However, optimal conditions for implementing peer-led health promotion programs (HPPs) remain unclear. This scoping review aimed to describe factors that can influence implementation of peer-led HPPs targeting adult populations. Five databases were searched using the keywords "health promotion/prevention", "implementation", "peers", and related terms. Studies were included if they reported at least one factor associated with the implementation of community-based peer-led HPPs. Fifty-five studies were selected for the analysis. The method known as "best fit framework synthesis" was used to analyze the factors identified in the selected papers. Many factors included in existing implementation conceptual frameworks were deemed applicable to peer-led HPPs. However, other factors related to individuals, programs, and implementation context also emerged from the analysis. Based on this synthesis, an adapted theoretical framework was elaborated, grounded in a complex adaptive system perspective and specifying potential mechanisms through which factors may influence implementation of community-based peer-led HPPs. Further research is needed to test the theoretical framework against empirical data. Findings from this scoping review increase our knowledge of the optimal conditions for implementing peer-led HPPs, thereby helping to maximize the benefits of such programs.

  8. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, cord and brainstem max doses, and right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.
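
    Of the three techniques, pattern search is the easiest to show in isolation. The compass-style variant below (an illustration, not the authors' implementation) polls the coordinate directions and halves the step when no poll point improves, so it needs no gradient information.

      # Compass-style pattern search on a toy smooth objective.
      def pattern_search(f, x0, step=1.0, tol=1e-6):
          x, n = list(x0), len(x0)
          while step > tol:
              improved = False
              for i in range(n):
                  for s in (+step, -step):
                      trial = x[:]
                      trial[i] += s
                      if f(trial) < f(x):        # accept an improving poll point
                          x, improved = trial, True
              if not improved:
                  step /= 2                      # contract the mesh
          return x

      f = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2   # minimum at (3, -1)
      print(pattern_search(f, [0.0, 0.0]))               # -> approximately [3, -1]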

  9. Rapid Prototyping Technologies and their Applications in Prosthodontics, a Review of Literature

    PubMed Central

    Torabi, Kianoosh; Farjood, Ehsan; Hamedani, Shahram

    2015-01-01

    The early computer-aided design/computer-aided manufacturing (CAD/CAM) systems relied exclusively on subtractive methods. In recent years, additive methods employing rapid prototyping (RP) have progressed rapidly in various fields of dentistry, as they have the potential to overcome known drawbacks of subtractive techniques such as fit problems. RP techniques have been exploited to build complex 3D models in medicine since the 1990s. RP has recently seen successful applications in various dental fields, such as fabrication of implant surgical guides, frameworks for fixed and removable partial dentures, wax patterns for dental prostheses, zirconia prostheses and molds for metal castings, maxillofacial prostheses and, finally, complete dentures. This paper aimed to offer a comprehensive literature review of various RP methods, particularly in dentistry, that are expected to bring many improvements to the field. A search was made through the MEDLINE database and the Google Scholar search engine. The keywords ‘rapid prototyping’ and ‘dentistry’ were searched in the title/abstract of publications, limited to 2003 to 2013, covering the past decade. The inclusion criterion was technical research that predominantly included laboratory procedures. The exclusion criterion was meticulous clinical and excessive technical procedures. A total of 106 articles were retrieved and reviewed by the authors, and only 50 met the specified inclusion criteria for this review. Selected articles had used rapid prototyping in various fields of dentistry through different techniques. This review depicted the different laboratory procedures employed in this method and confirmed that RP techniques are substantially feasible in dentistry. With advancement in various RP systems, it is possible to benefit from this technique in different dental practices, particularly in implementing dental prostheses for different applications. PMID:25759851

  10. Rapid Prototyping Technologies and their Applications in Prosthodontics, a Review of Literature.

    PubMed

    Torabi, Kianoosh; Farjood, Ehsan; Hamedani, Shahram

    2015-03-01

    The early computer-aided design/computer-aided manufacturing (CAD/CAM) systems relied exclusively on subtractive methods. In recent years, additive methods employing rapid prototyping (RP) have progressed rapidly in various fields of dentistry, as they have the potential to overcome known drawbacks of subtractive techniques such as fit problems. RP techniques have been exploited to build complex 3D models in medicine since the 1990s. RP has recently seen successful applications in various dental fields, such as fabrication of implant surgical guides, frameworks for fixed and removable partial dentures, wax patterns for dental prostheses, zirconia prostheses and molds for metal castings, maxillofacial prostheses and, finally, complete dentures. This paper aimed to offer a comprehensive literature review of various RP methods, particularly in dentistry, that are expected to bring many improvements to the field. A search was made through the MEDLINE database and the Google Scholar search engine. The keywords 'rapid prototyping' and 'dentistry' were searched in the title/abstract of publications, limited to 2003 to 2013, covering the past decade. The inclusion criterion was technical research that predominantly included laboratory procedures. The exclusion criterion was meticulous clinical and excessive technical procedures. A total of 106 articles were retrieved and reviewed by the authors, and only 50 met the specified inclusion criteria for this review. Selected articles had used rapid prototyping in various fields of dentistry through different techniques. This review depicted the different laboratory procedures employed in this method and confirmed that RP techniques are substantially feasible in dentistry. With advancement in various RP systems, it is possible to benefit from this technique in different dental practices, particularly in implementing dental prostheses for different applications.

  11. A Narrative Review and Novel Framework for Application of Team-Based Learning in Graduate Medical Education.

    PubMed

    Poeppelman, Rachel Stork; Liebert, Cara A; Vegas, Daniel Brandt; Germann, Carl A; Volerman, Anna

    2016-10-01

    Team-based learning (TBL) promotes problem solving and teamwork, and has been applied as an instructional method in undergraduate medical education with purported benefits. Although TBL curricula have been implemented for residents, no published systematic reviews or guidelines exist for the development and use of TBL in graduate medical education (GME). This study set out to review TBL curricula in GME, identify gaps in the literature, and synthesize a framework to guide the development of TBL curricula at the GME level. We searched PubMed, MEDLINE, and ERIC databases from 1990 to 2014 for relevant articles. References were reviewed to identify additional studies. The inclusion criteria were peer-reviewed publications in English that described TBL curriculum implementation in GME. Data were systematically abstracted and reviewed for consensus. Based on included publications, a 4-element framework (system, residents, significance, and scaffolding) was developed to serve as a step-wise guide to planning a TBL curriculum in GME. Nine publications describing 7 unique TBL curricula in residency met inclusion criteria. Outcomes included feasibility, satisfaction, clinical behavior, teamwork, and knowledge application. TBL appears feasible in the GME environment, with learner reactions ranging from positive to neutral. Gaps in the literature occur within each of the 4 elements of the suggested framework, including: system, faculty preparation time and minimum length of effective TBL sessions; residents, impact of team heterogeneity and inconsistent attendance; significance, comparison to other instructional methods and outcomes measuring knowledge retention, knowledge application, and skill development; and scaffolding, factors that influence the completion of preparatory work.

  12. Simultaneous learning of instantaneous and time-delayed genetic interactions using novel information theoretic scoring technique

    PubMed Central

    2012-01-01

    Background Understanding gene interactions is a fundamental question in systems biology. Currently, modeling of gene regulations using the Bayesian Network (BN) formalism assumes that genes interact either instantaneously or with a certain amount of time delay. However in reality, biological regulations, both instantaneous and time-delayed, occur simultaneously. A framework that can detect and model both these two types of interactions simultaneously would represent gene regulatory networks more accurately. Results In this paper, we introduce a framework based on the Bayesian Network (BN) formalism that can represent both instantaneous and time-delayed interactions between genes simultaneously. A novel scoring metric having firm mathematical underpinnings is also proposed that, unlike other recent methods, can score both interactions concurrently and takes into account the reality that multiple regulators can regulate a gene jointly, rather than in an isolated pair-wise manner. Further, a gene regulatory network (GRN) inference method employing an evolutionary search that makes use of the framework and the scoring metric is also presented. Conclusion By taking into consideration the biological fact that both instantaneous and time-delayed regulations can occur among genes, our approach models gene interactions with greater accuracy. The proposed framework is efficient and can be used to infer gene networks having multiple orders of instantaneous and time-delayed regulations simultaneously. Experiments are carried out using three different synthetic networks (with three different mechanisms for generating synthetic data) as well as real life networks of Saccharomyces cerevisiae, E. coli and cyanobacteria gene expression data. The results show the effectiveness of our approach. PMID:22691450
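
    The intuition behind scoring both interaction types concurrently can be shown with a toy example; this is not the paper's information-theoretic metric, which scores regulator sets jointly rather than pairwise. Here a candidate regulator-target pair is scored at lag 0 (instantaneous) and lag 1 (time-delayed) on a simulated expression series.

      # Score a candidate regulator->target link at lag 0 and lag 1 (toy example).
      import numpy as np

      rng = np.random.default_rng(1)
      T = 200
      g1 = rng.normal(size=T)
      g2 = np.roll(g1, 1) * 0.9 + rng.normal(scale=0.3, size=T)  # g1 regulates g2 with delay 1

      def lagged_score(reg, tgt, lag):
          """|correlation| between the regulator shifted by `lag` and the target."""
          if lag == 0:
              return abs(np.corrcoef(reg, tgt)[0, 1])
          return abs(np.corrcoef(reg[:-lag], tgt[lag:])[0, 1])

      scores = {lag: lagged_score(g1, g2, lag) for lag in (0, 1)}
      print(scores)   # the lag-1 score dominates, flagging a time-delayed interaction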

  13. PyQuant: A Versatile Framework for Analysis of Quantitative Mass Spectrometry Data*

    PubMed Central

    Mitchell, Christopher J.; Kim, Min-Sik; Na, Chan Hyun; Pandey, Akhilesh

    2016-01-01

    Quantitative mass spectrometry data necessitates an analytical pipeline that captures the accuracy and comprehensiveness of the experiments. Currently, data analysis is often coupled to specific software packages, which restricts the analysis to a given workflow and precludes a more thorough characterization of the data by other complementary tools. To address this, we have developed PyQuant, a cross-platform mass spectrometry data quantification application that is compatible with existing frameworks and can be used as a stand-alone quantification tool. PyQuant supports most types of quantitative mass spectrometry data including SILAC, NeuCode, 15N, 13C, or 18O and chemical methods such as iTRAQ or TMT and provides the option of adding custom labeling strategies. In addition, PyQuant can perform specialized analyses such as quantifying isotopically labeled samples where the label has been metabolized into other amino acids and targeted quantification of selected ions independent of spectral assignment. PyQuant is capable of quantifying search results from popular proteomic frameworks such as MaxQuant, Proteome Discoverer, and the Trans-Proteomic Pipeline in addition to several standalone search engines. We have found that PyQuant routinely quantifies a greater proportion of spectral assignments, with increases ranging from 25–45% in this study. Finally, PyQuant is capable of complementing spectral assignments between replicates to quantify ions missed because of lack of MS/MS fragmentation or that were omitted because of issues such as spectra quality or false discovery rates. This results in an increase of biologically useful data available for interpretation. In summary, PyQuant is a flexible mass spectrometry data quantification platform that is capable of interfacing with a variety of existing formats and is highly customizable, which permits easy configuration for custom analysis. PMID:27231314

  14. Towards a More Efficient Detection of Earthquake-Induced Façade Damages Using Oblique UAV Imagery

    NASA Astrophysics Data System (ADS)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2017-08-01

    Urban search and rescue (USaR) teams require a fast and thorough building damage assessment to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for the task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight; ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights; and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the extracted façade image regions to be assessed for damage 6-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighborhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.

  15. Sex/Gender Differences and Autism: Setting the Scene for Future Research

    PubMed Central

    Lai, Meng-Chuan; Lombardo, Michael V.; Auyeung, Bonnie; Chakrabarti, Bhismadev; Baron-Cohen, Simon

    2015-01-01

    Objective The relationship between sex/gender differences and autism has attracted a variety of research ranging from clinical and neurobiological to etiological, stimulated by the male bias in autism prevalence. Findings are complex and do not always relate to each other in a straightforward manner. Distinct but interlinked questions on the relationship between sex/gender differences and autism remain underaddressed. To better understand the implications from existing research and to help design future studies, we propose a 4-level conceptual framework to clarify the embedded themes. Method We searched PubMed for publications before September 2014 using search terms “‘sex OR gender OR females’ AND autism.” A total of 1,906 articles were screened for relevance, along with publications identified via additional literature reviews, resulting in 329 articles that were reviewed. Results Level 1, “Nosological and diagnostic challenges,” concerns the question, “How should autism be defined and diagnosed in males and females?” Level 2, “Sex/gender-independent and sex/gender-dependent characteristics,” addresses the question, “What are the similarities and differences between males and females with autism?” Level 3, “General models of etiology: liability and threshold,” asks the question, “How is the liability for developing autism linked to sex/gender?” Level 4, “Specific etiological–developmental mechanisms,” focuses on the question, “What etiological–developmental mechanisms of autism are implicated by sex/gender and/or sexual/gender differentiation?” Conclusions Using this conceptual framework, findings can be more clearly summarized, and the implications of the links between findings from different levels can become clearer. Based on this 4-level framework, we suggest future research directions, methodology, and specific topics in sex/gender differences and autism. PMID:25524786

  16. Challenges in systematic reviews: synthesis of topics related to the delivery, organization, and financing of health care.

    PubMed

    Bravata, Dena M; McDonald, Kathryn M; Shojania, Kaveh G; Sundaram, Vandana; Owens, Douglas K

    2005-06-21

    Some important health policy topics, such as those related to the delivery, organization, and financing of health care, present substantial challenges to established methods for evidence synthesis. For example, such reviews may ask: What is the effect of for-profit versus not-for-profit delivery of care on patient outcomes? Or, which strategies are the most effective for promoting preventive care? This paper describes innovative methods for synthesizing evidence related to the delivery, organization, and financing of health care. We found 13 systematic reviews on these topics that described novel methodologic approaches. Several of these syntheses used 3 approaches: conceptual frameworks to inform problem formulation, systematic searches that included nontraditional literature sources, and hybrid synthesis methods that included simulations to address key gaps in the literature. As the primary literature on these topics expands, so will opportunities to develop additional novel methods for performing high-quality comprehensive syntheses.

  17. An extended abstract: A heuristic repair method for constraint-satisfaction and scheduling problems

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Johnston, Mark D.; Philips, Andrew B.; Laird, Philip

    1992-01-01

    The work described in this paper was inspired by a surprisingly effective neural network developed for scheduling astronomical observations on the Hubble Space Telescope. Our heuristic constraint satisfaction problem (CSP) method was distilled from an analysis of the network. In the process of carrying out the analysis, we discovered that the effectiveness of the network has little to do with its connectionist implementation. Furthermore, the ideas employed in the network can be implemented very efficiently within a symbolic CSP framework. The symbolic implementation is extremely simple. It also has the advantage that several different search strategies can be employed, although we have found that hill-climbing methods are particularly well-suited for the applications that we have investigated. We begin the paper with a brief review of the neural network. Following this, we describe our symbolic method for heuristic repair.
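
    A minimal sketch of this repair strategy, illustrated on n-queens (our simplification of the idea, not the authors' exact implementation): start from a complete, possibly conflicting assignment and repeatedly reassign a conflicted variable to the value that minimizes its conflicts:

        # Min-conflicts style heuristic repair on n-queens: one queen per row,
        # cols[r] is the column of the queen in row r.
        import random

        def conflicts(cols, row, col):
            """Number of queens attacking a queen placed at (row, col)."""
            return sum(1 for r, c in enumerate(cols)
                       if r != row and (c == col or abs(c - col) == abs(r - row)))

        def min_conflicts_nqueens(n, max_steps=100_000):
            cols = [random.randrange(n) for _ in range(n)]  # complete initial assignment
            for _ in range(max_steps):
                conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
                if not conflicted:
                    return cols                             # solution: no attacks remain
                row = random.choice(conflicted)             # pick a conflicted variable
                # repair: reassign it to the value with the fewest conflicts
                cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
            return None

        print(min_conflicts_nqueens(50))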

  18. A concept ideation framework for medical device design.

    PubMed

    Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar

    2015-06-01

    Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. A Framework for the Design of Computer-Assisted Simulation Training for Complex Police Situations

    ERIC Educational Resources Information Center

    Söderström, Tor; Åström, Jan; Anderson, Greg; Bowles, Ron

    2014-01-01

    Purpose: The purpose of this paper is to report progress concerning the design of a computer-assisted simulation training (CAST) platform for developing decision-making skills in police students. The overarching aim is to outline a theoretical framework for the design of CAST to facilitate police students' development of search techniques in…

  20. In Search of an Identity: Air Force Core Competencies

    DTIC Science & Technology

    1997-06-01

    Core competencies have become a decision-making framework for the Air Force, connecting them to audiences both inside and outside the service. (Recoverable figure titles from the extraction: "Proposed Intra-Service Relationship"; Figure 2, "Proposed Inter-Service and Joint …".)

  1. A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine

    NASA Astrophysics Data System (ADS)

    Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu

    In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. A selective meta-search engine needs a method for selecting appropriate search engines for users' queries. Most existing methods use statistical data such as document frequency, and may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from the documents in a search engine and is used as a source description of that search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection that considers the relationships between terms, and thus overcomes the problems caused by polysemous words. Further, our method does not require a centralized broker that maintains data, such as document frequencies, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method than with other existing methods.
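
    The sketch below illustrates the flavor of the approach (the co-occurrence weighting and the scoring rule are our assumptions, not the paper's exact algorithm): each engine's documents yield a term co-occurrence thesaurus, and the engine whose thesaurus most strongly relates the query terms to each other is selected, which disambiguates polysemous words such as "bank":

        # Illustrative engine selection via per-engine co-occurrence thesauri.
        from collections import defaultdict
        from itertools import combinations

        def build_thesaurus(documents):
            """Map each term to co-occurring terms, weighted by document counts."""
            cooc = defaultdict(lambda: defaultdict(int))
            for doc in documents:
                terms = set(doc.lower().split())
                for a, b in combinations(sorted(terms), 2):
                    cooc[a][b] += 1
                    cooc[b][a] += 1
            return cooc

        def engine_score(thesaurus, query_terms):
            """How strongly the query terms co-occur in this engine's corpus."""
            return sum(thesaurus[a][b] for a, b in combinations(query_terms, 2))

        engines = {
            "finance": ["the bank raised interest rates", "bank loans and credit"],
            "nature":  ["the river bank eroded slowly", "fish near the river bank"],
        }
        thesauri = {name: build_thesaurus(docs) for name, docs in engines.items()}
        query = ["bank", "river"]
        best = max(thesauri, key=lambda name: engine_score(thesauri[name], query))
        print(best)   # -> "nature": "bank" co-occurs with "river" only there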

  2. Fast and Flexible Multivariate Time Series Subsequence Search

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Oza, Nikunj C.; Zhu, Qiang; Srivastava, Ashok N.

    2010-01-01

    Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns in these MTS databases, which often contain several gigabytes of data. Surprisingly, research on MTS search is very limited. Most of the existing work only supports queries with the same length of data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two algorithms to solve this problem: (1) a List Based Search (LBS) algorithm, which uses sorted lists for indexing, and (2) an R*-tree Based Search (RBS), which uses Minimum Bounding Rectangles (MBR) to organize the subsequences. Both algorithms guarantee that all matching patterns within the specified thresholds will be returned (no false dismissals). The very few false alarms can be removed by a post-processing step. Since our framework is also capable of Univariate Time-Series (UTS) subsequence search, we first demonstrate the efficiency of our algorithms on several UTS datasets previously used in the literature. We follow this up with experiments using two large MTS databases from the aviation domain, each containing several million observations. Both these tests show that our algorithms have very high prune rates (>99%) and thus need actual disk access for less than 1% of the observations. To the best of our knowledge, MTS subsequence search has never been attempted on datasets of the size used in this paper.
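
    The following simplified sketch (ours; it is not the paper's LBS or RBS index) shows why MBR-style bounds give no false dismissals: candidate windows are first screened with a cheap lower bound on the Euclidean distance, and the exact distance is computed only for survivors:

        # Subsequence search with a segment-envelope (MBR-style) lower bound.
        import numpy as np

        def mbr_lower_bound(window, q_min, q_max, seg):
            """Lower-bounds the Euclidean distance using per-segment query envelopes."""
            lb = 0.0
            for i in range(0, len(window), seg):
                chunk = window[i:i + seg]
                j = i // seg
                # any point outside [q_min, q_max] contributes at least this much
                lb += np.sum(np.maximum(chunk - q_max[j], 0) ** 2)
                lb += np.sum(np.maximum(q_min[j] - chunk, 0) ** 2)
            return np.sqrt(lb)

        def subsequence_search(series, query, threshold, seg=8):
            m = len(query)
            q_min = np.array([query[i:i + seg].min() for i in range(0, m, seg)])
            q_max = np.array([query[i:i + seg].max() for i in range(0, m, seg)])
            hits = []
            for start in range(len(series) - m + 1):
                window = series[start:start + m]
                if mbr_lower_bound(window, q_min, q_max, seg) > threshold:
                    continue                                   # safely pruned
                if np.linalg.norm(window - query) <= threshold:
                    hits.append(start)                         # verified match
            return hits

        rng = np.random.default_rng(0)
        series = rng.normal(size=10_000)
        query = series[1234:1234 + 64] + rng.normal(scale=0.05, size=64)
        print(subsequence_search(series, query, threshold=1.0))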

  3. Scoring-and-unfolding trimmed tree assembler: concepts, constructs and comparisons.

    PubMed

    Narzisi, Giuseppe; Mishra, Bud

    2011-01-15

    Mired by its connection to a well-known NP-complete combinatorial optimization problem, namely the Shortest Common Superstring Problem (SCSP), the whole-genome sequence assembly (WGSA) problem has historically been assumed to be amenable only to greedy and heuristic methods. By placing efficiency as their first priority, these methods opted to rely only on local searches, and are thus inherently approximate, ambiguous or error prone, especially for genomes with complex structures. Furthermore, since the choice of the best heuristics depended critically on the properties of (e.g. errors in) the input data and the available long-range information, these approaches hindered the design of an error-free WGSA pipeline. We dispense with the idea of limiting the solutions to just the approximated ones, and instead favor an approach that could potentially lead to an exhaustive (exponential-time) search of all possible layouts. Its computational complexity thus must be tamed through a constrained search (Branch-and-Bound) and quick identification and pruning of implausible overlays. For this purpose, such a method necessarily relies on a set of score functions (oracles) that can combine different structural properties (e.g. transitivity, coverage, physical maps, etc.). We give a detailed description of this novel assembly framework, referred to as Scoring-and-Unfolding Trimmed Tree Assembler (SUTTA), and present experimental results on several bacterial genomes using next-generation sequencing technology data. We also report experimental evidence that the assembly quality strongly depends on the choice of the minimum overlap parameter k. SUTTA's binaries are freely available to non-profit institutions for research and educational purposes at http://www.bioinformatics.nyu.edu.

  4. Proteomics Versus Clinical Data and Stochastic Local Search Based Feature Selection for Acute Myeloid Leukemia Patients' Classification.

    PubMed

    Chebouba, Lokmane; Boughaci, Dalila; Guziolowski, Carito

    2018-06-04

    The use of data from high-throughput technologies in drug target problems has become widespread over the last decades. This study proposes a meta-heuristic framework using stochastic local search (SLS) combined with random forest (RF), where the aim is to identify the most important genes and proteins leading to the best classification of Acute Myeloid Leukemia (AML) patients. First, we use a stochastic local search meta-heuristic as a feature selection technique to select the most significant proteins to be used in the classification step. Then we apply RF to classify new patients into their corresponding classes. For evaluation, the RF classifier is run on the training data to obtain a model, which is then applied to the test data to predict the appropriate class. We use the balanced accuracy (BAC) and the area under the receiver operating characteristic curve (AUROC) as metrics to measure the performance of our model. The proposed method is evaluated on the dataset from the DREAM 9 challenge. The comparison is done with a pure random forest (without feature selection) and with the two best-ranked results of the DREAM 9 challenge. We used three types of data: only clinical data, only proteomics data, and clinical and proteomics data combined. The numerical results show that the highest scores are obtained when using clinical data alone, and the lowest when using proteomics data alone. Further, our method succeeds in finding promising results compared to the methods presented in the DREAM challenge.
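
    A minimal sketch of the two-stage idea on synthetic data (the move operator, the acceptance rule and all parameters are our assumptions): a stochastic local search flips feature-inclusion bits to maximize the balanced accuracy of a random forest on held-out data:

        # SLS feature selection wrapped around a random forest classifier.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.model_selection import train_test_split

        def evaluate(mask, Xtr, ytr, Xva, yva):
            if not mask.any():
                return 0.0
            rf = RandomForestClassifier(n_estimators=100, random_state=0)
            rf.fit(Xtr[:, mask], ytr)
            return balanced_accuracy_score(yva, rf.predict(Xva[:, mask]))

        def stochastic_local_search(Xtr, ytr, Xva, yva, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            mask = rng.random(Xtr.shape[1]) < 0.2          # random initial subset
            best = evaluate(mask, Xtr, ytr, Xva, yva)
            for _ in range(iters):
                cand = mask.copy()
                cand[rng.integers(len(cand))] ^= True      # flip one feature bit
                score = evaluate(cand, Xtr, ytr, Xva, yva)
                if score >= best or rng.random() < 0.05:   # accept improvements, rare random moves
                    mask, best = cand, score
            return mask, best

        X, y = make_classification(n_samples=300, n_features=40, n_informative=5, random_state=0)
        Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)
        mask, bac = stochastic_local_search(Xtr, ytr, Xva, yva)
        print(mask.sum(), "features selected, validation BAC =", round(bac, 3))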

  5. Automatic segmentation of mitochondria in EM data using pairwise affinity factorization and graph-based contour searching.

    PubMed

    Ghita, Ovidiu; Dietlmeier, Julia; Whelan, Paul F

    2014-10-01

    In this paper, we investigate the segmentation of closed contours in subcellular data using a framework that primarily combines the pairwise affinity grouping principles with a graph partitioning contour searching approach. One salient problem that precluded the application of these methods to large scale segmentation problems is the onerous computational complexity required to generate comprehensive representations that include all pairwise relationships between all pixels in the input data. To compensate for this problem, a practical solution is to reduce the complexity of the input data by applying an over-segmentation technique prior to the application of the computationally demanding strands of the segmentation process. This approach opens the opportunity to build specific shape and intensity models that can be successfully employed to extract the salient structures in the input image which are further processed to identify the cycles in an undirected graph. The proposed framework has been applied to the segmentation of mitochondria membranes in electron microscopy data which are characterized by low contrast and low signal-to-noise ratio. The algorithm has been quantitatively evaluated using two datasets where the segmentation results have been compared with the corresponding manual annotations. The performance of the proposed algorithm has been measured using standard metrics, such as precision and recall, and the experimental results indicate a high level of segmentation accuracy.

  6. A review of the success and failure characteristics of resin-bonded bridges.

    PubMed

    Miettinen, M; Millar, B J

    2013-07-01

    This literature review was designed to assess and compare the success rates and modes of failure of metal-framed, fibre-reinforced composite and all-ceramic resin-bonded bridges. A Medline search (Ovid), supplemented by hand searching, was conducted to identify prospective and retrospective cohort studies on different resin-bonded bridges within the last 16 years. A total of 49 studies met the pre-set inclusion criteria. Success rates of 25 studies on metal-framed, 17 studies on fibre-reinforced composite and 7 studies on all-ceramic resin-bonded bridges were analysed and characteristics of failures were identified. The analysis of the studies indicated estimated annual failure rates of 4.6% (±1.3%, 95% CI) for metal-framed, 4.1% (±2.1%, 95% CI) for fibre-reinforced and 11.7% (±1.8%, 95% CI) for all-ceramic resin-bonded bridges. The most frequent complications were: debonding for metal-framed resin-bonded bridges (93% of all failures); delamination of the composite veneering material for the fibre-reinforced bridges (41%); and fracture of the framework for the all-ceramic bridges (57%). All types of resin-bonded bridges provide an effective short- to medium-term option, with all-ceramic bridges performing least well and having the least favourable mode of failure. The modes of failure differed between bridge types, with metal-framed bridges performing best over time.

  7. Rapid development of Proteomic applications with the AIBench framework.

    PubMed

    López-Fernández, Hugo; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Méndez Reboredo, José R; Santos, Hugo M; Carreira, Ricardo J; Capelo-Martínez, José L; Fdez-Riverola, Florentino

    2011-09-15

    In this paper we present two case studies of Proteomics application development using the AIBench framework, a Java desktop application framework mainly focused on scientific software development. The applications presented in this work are Decision Peptide-Driven, for rapid and accurate protein quantification, and Bacterial Identification, for Tuberculosis biomarker search and diagnosis. Both tools work with mass spectrometry data, specifically with MALDI-TOF spectra, minimizing the time required to process and analyze the experimental data. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.

  8. Optimal Multiple Surface Segmentation With Shape and Context Priors

    PubMed Central

    Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong

    2014-01-01

    Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309

  9. A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG

    PubMed Central

    Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng

    2017-01-01

    In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One of the important hurdles in the application of DWT is its settings, which in previous work have been chosen empirically or arbitrarily. This study aimed to develop a framework for automatically searching the optimal DWT settings to improve accuracy and to reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data using 7 commonly used wavelet families, up to the maximum theoretical level of each mother wavelet. The wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, which yielded optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
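
    The settings search can be sketched as follows (a simplified illustration using PyWavelets; the wavelet list and the placeholder scoring criterion are ours, whereas a real pipeline would score detection accuracy on labeled EEG):

        # Enumerate wavelet families and decomposition levels up to each
        # wavelet's maximum theoretical level, scoring each setting.
        import numpy as np
        import pywt

        def band_features(signal, wavelet, level):
            """Relative energy of each DWT sub-band, a common seizure feature."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        rng = np.random.default_rng(0)
        eeg = rng.normal(size=4096)                     # stand-in for one EEG epoch

        for wavelet in ["db4", "sym5", "coif3", "haar"]:
            max_level = pywt.dwt_max_level(len(eeg), pywt.Wavelet(wavelet).dec_len)
            for level in range(1, max_level + 1):
                feats = band_features(eeg, wavelet, level)
                # placeholder score: prefer settings that spread energy across bands
                score = -np.sum(feats * np.log(feats + 1e-12))
                print(wavelet, level, round(score, 3))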

  10. FGWAS: Functional genome wide association analysis.

    PubMed

    Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu

    2017-10-01

    Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
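
    The multivariate varying coefficient model at the core of such a framework is typically written as follows (our notation, a sketch of the standard formulation rather than a quotation of the paper's): for subject i with functional phenotype y_i(s) observed at surface location s and covariate/genotype vector x_i,

        y_i(s) = x_i^T \beta(s) + \eta_i(s) + \varepsilon_i(s),

    where \beta(s) collects smooth coefficient functions, \eta_i(s) is a subject-specific stochastic process capturing the spatial correlation of the phenotype (estimated through functional principal component analysis), and \varepsilon_i(s) is measurement error.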

  11. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework, which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The needs for the exploration of larger design spaces and to produce innovative designs make meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  12. Less-simplified models of dark matter for direct detection and the LHC

    NASA Astrophysics Data System (ADS)

    Choudhury, Arghya; Kowalska, Kamila; Roszkowski, Leszek; Sessolo, Enrico Maria; Williams, Andrew J.

    2016-04-01

    We construct models of dark matter with suppressed spin-independent scattering cross section utilizing the existing simplified model framework. Even simple combinations of simplified models can exhibit interference effects that cause the tree level contribution to the scattering cross section to vanish, thus demonstrating that direct detection limits on simplified models are not robust when embedded in a more complicated and realistic framework. In general for fermionic WIMP masses ≳ 10 GeV direct detection limits on the spin-independent scattering cross section are much stronger than those coming from the LHC. However these model combinations, which we call less-simplified models, represent situations where LHC searches become more competitive than direct detection experiments even for moderate dark matter mass. We show that a complementary use of several searches at the LHC can strongly constrain the direct detection blind spots by setting limits on the coupling constants and mediators' mass. We derive the strongest limits for combinations of vector + scalar, vector + "squark", and "squark" + scalar mediator, and present the corresponding projections for the LHC 14 TeV for a number of searches: mono-jet, jets + missing energy, and searches for heavy vector resonances.

  13. Issues in the design of a pilot concept-based query interface for the neuroinformatics information framework.

    PubMed

    Marenco, Luis; Li, Yuli; Martone, Maryann E; Sternberg, Paul W; Shepherd, Gordon M; Miller, Perry L

    2008-09-01

    This paper describes a pilot query interface that has been constructed to help us explore a "concept-based" approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface.

  14. Issues in the Design of a Pilot Concept-Based Query Interface for the Neuroinformatics Information Framework

    PubMed Central

    Li, Yuli; Martone, Maryann E.; Sternberg, Paul W.; Shepherd, Gordon M.; Miller, Perry L.

    2009-01-01

    This paper describes a pilot query interface that has been constructed to help us explore a “concept-based” approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface. PMID:18953674

  15. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    PubMed

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia content has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are used to retrieve images from an image database. The basic requirement of any image retrieval process is to sort images by closeness of visual appearance. Colour, shape and texture are examples of low-level image features. Features play a significant role in image processing: the representation of an image as a feature vector is obtained through feature extraction techniques, and the resulting features are used for classifying and recognising images. Because features define the behaviour of an image, they determine storage requirements, classification efficiency and time consumption. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each extraction technique works best. The effectiveness of a CBIR approach fundamentally depends on feature extraction, and in image processing tasks such as object recognition and image retrieval the feature descriptor is one of the most essential steps. The main idea of CBIR is to retrieve, from a dataset, images related to a query image using distance metrics. The proposed method performs image retrieval based on the YCbCr colour space combined with a Canny edge histogram and the discrete wavelet transform; this combination increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of particular wavelet functions for image retrieval. The proposed algorithm was trained and tested on the Wang image database, and an Artificial Neural Network (ANN) applied to this standard CBIR dataset performs the retrieval. The performance of the proposed descriptors is assessed by computing precision and recall values and comparing them with other proposed methods, demonstrating that our approach outperforms the existing research in terms of average precision and recall.
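
    A sketch of such a descriptor (parameter choices are assumptions, and the Canny edge histogram component is omitted for brevity): convert RGB to YCbCr (BT.601), take a multi-level 2-D DWT of the luma channel, and concatenate channel statistics into one feature vector:

        # YCbCr + DWT feature extraction for content-based image retrieval.
        import numpy as np
        import pywt

        def rgb_to_ycbcr(img):
            """BT.601 full-range RGB -> YCbCr conversion for a float HxWx3 image."""
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            y  =  0.299 * r + 0.587 * g + 0.114 * b
            cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
            cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
            return y, cb, cr

        def cbir_features(img, wavelet="db2", level=2):
            y, cb, cr = rgb_to_ycbcr(img.astype(float))
            coeffs = pywt.wavedec2(y, wavelet, level=level)   # 2-D DWT of luma
            parts = [np.array([coeffs[0].mean(), coeffs[0].std()])]
            for detail in coeffs[1:]:                         # (cH, cV, cD) per level
                parts.append(np.array([np.abs(d).mean() for d in detail]))
            parts.append(np.array([cb.mean(), cb.std(), cr.mean(), cr.std()]))
            return np.concatenate(parts)

        img = np.random.default_rng(0).integers(0, 256, size=(128, 128, 3))
        print(cbir_features(img).round(2))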

  16. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, have been adopted in image ranking models. However, existing ranking models cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank model based on visual features and user clicks outperforms state-of-the-art algorithms.

  17. Costing 'healthy' food baskets in Australia - a systematic review of food price and affordability monitoring tools, protocols and methods.

    PubMed

    Lewis, Meron; Lee, Amanda

    2016-11-01

    To undertake a systematic review to determine similarities and differences in metrics and results between recently and/or currently used tools, protocols and methods for monitoring Australian healthy food prices and affordability. Electronic databases of peer-reviewed literature and online grey literature were systematically searched using the PRISMA approach for articles and reports relating to healthy food and diet price assessment tools, protocols, methods and results that utilised retail pricing. National, state, regional and local areas of Australia from 1995 to 2015. Assessment tools, protocols and methods to measure the price of 'healthy' foods and diets. The search identified fifty-nine discrete surveys of 'healthy' food pricing incorporating six major food pricing tools (those used in multiple areas and time periods) and five minor food pricing tools (those used in a single survey area or time period). Analysis demonstrated methodological differences regarding: included foods; reference households; use of availability and/or quality measures; household income sources; store sampling methods; data collection protocols; analysis methods; and results. 'Healthy' food price assessment methods used in Australia lack comparability across all metrics and most do not fully align with a 'healthy' diet as recommended by the current Australian Dietary Guidelines. None have been applied nationally. Assessment of the price, price differential and affordability of healthy (recommended) and current (unhealthy) diets would provide more robust and meaningful data to inform health and fiscal policy in Australia. The INFORMAS 'optimal' approach provides a potential framework for development of these methods.

  18. Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction

    NASA Astrophysics Data System (ADS)

    Bui, Lam Thu; Barlow, Michael

    We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize the parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, either at the population level or at the individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information about when it will do local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.
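
    A minimal sketch of the population-level scheme (our simplification, with a toy objective standing in for the HMM likelihood): a genetic algorithm in which the probability of applying local search decays linearly to zero at the final generation:

        # Memetic GA with a population-level local-search rate that decays to zero.
        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(x):                       # toy stand-in for an HMM likelihood
            return -np.sum((x - 0.3) ** 2)

        def local_search(x, steps=10, step=0.05):
            """Simple hill climber used as the meme (local refinement)."""
            for _ in range(steps):
                cand = x + rng.normal(scale=step, size=x.shape)
                if fitness(cand) > fitness(x):
                    x = cand
            return x

        pop = rng.random((30, 8))
        generations = 50
        for g in range(generations):
            ls_rate = 1.0 - g / (generations - 1)         # decays from 1 to 0
            order = np.argsort([fitness(x) for x in pop])[::-1]
            parents = pop[order[:10]]                     # truncation selection
            children = []
            for _ in range(len(pop)):
                a, b = parents[rng.integers(10, size=2)]
                child = np.where(rng.random(8) < 0.5, a, b)   # uniform crossover
                child = child + rng.normal(scale=0.02, size=8)  # mutation
                if rng.random() < ls_rate:                    # memetic step, decaying rate
                    child = local_search(child)
                children.append(child)
            pop = np.array(children)
        print(round(fitness(max(pop, key=fitness)), 4))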

  19. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the proprietary characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed, while the computation time is of polynomial complexity in the number of users.
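
    The local improvement procedure can be sketched as follows (our simplification, assuming unit amplitudes; the standard OMD objective maximizes f(b) = 2 b^T y - b^T R b over b in {-1, +1}^K, where y is the matched-filter output and R the code correlation matrix): iterated single-bit flips are applied until no flip improves the objective, starting from the conventional detector's decision:

        # Bit-flip local search on the standard OMD objective (unit amplitudes).
        import numpy as np

        def omd_objective(b, y, R):
            return 2 * b @ y - b @ R @ b

        def bit_flip_local_search(b, y, R):
            best = omd_objective(b, y, R)
            improved = True
            while improved:
                improved = False
                for k in range(len(b)):
                    b[k] = -b[k]                     # tentative flip of user k's bit
                    score = omd_objective(b, y, R)
                    if score > best:
                        best, improved = score, True # keep the improving flip
                    else:
                        b[k] = -b[k]                 # revert
            return b

        rng = np.random.default_rng(0)
        K, N = 10, 31
        S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
        R = S.T @ S                                             # correlation matrix
        b_true = rng.choice([-1.0, 1.0], size=K)
        y = R @ b_true + 0.1 * rng.normal(size=K)               # matched-filter output
        b = np.sign(y)                                          # conventional detector start
        b = bit_flip_local_search(b, y, R)
        print(int(np.sum(b != b_true)), "bit errors")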

  20. Update on CERN Search based on SharePoint 2013

    NASA Astrophysics Data System (ADS)

    Alvarez, E.; Fernandez, S.; Lossent, A.; Posada, I.; Silva, B.; Wagner, A.

    2017-10-01

    CERN’s enterprise Search solution “CERN Search” provides a central search solution for users and CERN service providers. A total of about 20 million public and protected documents from a wide range of document collections are indexed, including Indico, TWiki, Drupal, SharePoint, JACOW, E-group archives, EDMS, and CERN Web pages. In spring 2015, CERN Search was migrated to a new infrastructure based on SharePoint 2013. In the context of this upgrade, the document pre-processing and indexing process was redesigned and generalised. The new data feeding framework makes it possible to profit from new functionality and facilitates the long-term maintenance of the system.

  1. Development of hospital disaster resilience: conceptual framework and potential measurement.

    PubMed

    Zhong, Shuang; Clark, Michele; Hou, Xiang-Yu; Zang, Yu-Li; Fitzgerald, Gerard

    2014-11-01

    Despite 'hospital resilience' gaining prominence in recent years, it remains poorly defined. This article aims to define hospital resilience, build a preliminary conceptual framework and highlight possible approaches to measurement. Searches were conducted of the commonly used health databases to identify relevant literature and reports. Search terms included 'resilience and framework or model' or 'evaluation or assess or measure and hospital and disaster or emergency or mass casualty and resilience or capacity or preparedness or response or safety'. Articles were retrieved that focussed on disaster resilience frameworks and the evaluation of various hospital capacities. A total of 1480 potentially eligible publications were retrieved initially, but the final analysis was conducted on 47 articles that appeared to contribute to the study objectives. Four disaster resilience frameworks and 11 evaluation instruments of hospital disaster capacity were included. Hospital resilience is a comprehensive concept derived from existing disaster resilience frameworks. It has four key domains: hospital safety; disaster preparedness and resources; continuity of essential medical services; and recovery and adaptation. These domains were categorised according to four criteria, namely robustness, redundancy, resourcefulness and rapidity. A conceptual understanding of hospital resilience is an essential intellectual basis for an integrated approach to system development. This article (1) defines hospital resilience; (2) constructs a conceptual framework (including key domains); (3) proposes comprehensive measures for possible inclusion in an evaluation instrument; and (4) develops a matrix of critical issues to enhance hospital resilience to cope with future disasters. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. A Conceptual Framework for Evaluation of Public Health and Primary Care System Performance in Iran

    PubMed Central

    Jahanmehr, Nader; Rashidian, Arash; Khosravi, Ardeshir; Farzadfar, Farshad; Shariati, Mohammad; Majdzadeh, Reza; Sari, Ali Akbari; Mesdaghinia, Alireza

    2015-01-01

    Introduction: The main objective of this study was to design a conceptual framework, according to the policies and priorities of the ministry of health, to evaluate provincial public health and primary care performance and to assess their share in the overall health impacts on the community. Methods: We used several tools and techniques, including system thinking, a literature review to identify relevant attributes of health system performance frameworks, and interviews with the key stakeholders. The PubMed, Scopus, Web of Science, Google Scholar and two specialized databases of Persian-language literature (IranMedex and SID) were searched using the main terms and keywords. Following decision-making and collective agreement among the different stakeholders, 51 core indicators were chosen from among 602 obtained indicators in a four-stage process, for monitoring and evaluation of Health Deputies. Results: We proposed a conceptual framework by identifying the performance area for Health Deputies among other determinants of health, as well as introducing a chain of results for performance, consisting of input, process, output and outcome indicators. We also proposed 5 dimensions for measuring the performance of Health Deputies, consisting of efficiency, effectiveness, equity, access and improvement of health status. Conclusion: The proposed conceptual framework clearly illustrates the Health Deputies' success in achieving the best results and consequences for health in the country. The relative commitment of the ministry of health and of Health Deputies at the University of Medical Sciences is essential for full implementation of this framework and for providing the annual performance report. PMID:25946937

  3. Developing a pressure ulcer risk factor minimum data set and risk assessment framework.

    PubMed

    Coleman, Susanne; Nelson, E Andrea; Keen, Justin; Wilson, Lyn; McGinnis, Elizabeth; Dealey, Carol; Stubbs, Nikki; Muir, Delia; Farrin, Amanda; Dowding, Dawn; Schols, Jos M G A; Cuddigan, Janet; Berlowitz, Dan; Jude, Edward; Vowden, Peter; Bader, Dan L; Gefen, Amit; Oomens, Cees W J; Schoonhoven, Lisette; Nixon, Jane

    2014-10-01

    To agree a draft pressure ulcer risk factor Minimum Data Set to underpin the development of a new evidenced-based Risk Assessment Framework. A recent systematic review identified the need for a pressure ulcer risk factor Minimum Data Set and development and validation of an evidenced-based pressure ulcer Risk Assessment Framework. This was undertaken through the Pressure UlceR Programme Of reSEarch (RP-PG-0407-10056), funded by the National Institute for Health Research and incorporates five phases. This article reports phase two, a consensus study. Consensus study. A modified nominal group technique based on the Research and Development/University of California at Los Angeles appropriateness method. This incorporated an expert group, review of the evidence and the views of a Patient and Public Involvement service user group. Data were collected December 2010-December 2011. The risk factors and assessment items of the Minimum Data Set (including immobility, pressure ulcer and skin status, perfusion, diabetes, skin moisture, sensory perception and nutrition) were agreed. In addition, a draft Risk Assessment Framework incorporating all Minimum Data Set items was developed, comprising a two stage assessment process (screening and detailed full assessment) and decision pathways. The draft Risk Assessment Framework will undergo further design and pre-testing with clinical nurses to assess and improve its usability. It will then be evaluated in clinical practice to assess its validity and reliability. The Minimum Data Set could be used in future for large scale risk factor studies informing refinement of the Risk Assessment Framework. © 2014 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.

  4. A competency framework for librarians involved in systematic reviews.

    PubMed

    Townsend, Whitney A; Anderson, Patricia F; Ginier, Emily C; MacEachern, Mark P; Saylor, Kate M; Shipman, Barbara L; Smith, Judith E

    2017-07-01

    The project identified a set of core competencies for librarians who are involved in systematic reviews. A team of seven informationists with broad systematic review experience examined existing systematic review standards, conducted a literature search, and used their own expertise to identify core competencies and skills that are necessary to undertake various roles in systematic review projects. The team identified a total of six competencies for librarian involvement in systematic reviews: "Systematic review foundations," "Process management and communication," "Research methodology," "Comprehensive searching," "Data management," and "Reporting." Within each competency are the associated skills and knowledge pieces (indicators). Competence can be measured using an adaptation of Miller's Pyramid for Clinical Assessment, either through self-assessment or identification of formal assessment instruments. The Systematic Review Competencies Framework provides a standards-based, flexible way for librarians and organizations to identify areas of competence and areas in need of development to build capacity for systematic review integration. The framework can be used to identify or develop appropriate assessment tools and to target skill development opportunities.

  5. Motivation in pediatric motor rehabilitation: A systematic search of the literature using the self-determination theory as a conceptual framework.

    PubMed

    Meyns, Pieter; Roman de Mettelinge, Tine; van der Spank, Judith; Coussens, Marieke; Van Waelvelde, Hilde

    2017-03-09

    Motivation is suggested to be an important factor in pediatric motor rehabilitation. We therefore reviewed the existing evidence on (motivational) motor rehabilitation paradigms and on how motivation influences rehabilitation outcome, using self-determination theory as the conceptual framework. The PubMed and Web of Science databases were systematically searched up to June 2015. Data were independently extracted and critiqued for quality by three authors. Studies reporting motivational aspects were included. Most studies examined new technology (e.g., virtual reality [VR]). Out of 479 records, three RCTs, six case-control studies, and six non-comparative studies of mixed quality were included. Motivation was rarely reported. Individualizing training to the child's capabilities, with more variety, seemed promising for increasing motivation. Motivation increased when the exercises seemed helpful for daily activities. Motivation in pediatric rehabilitation should be comprehensively assessed within a theoretical framework, as there are indications that motivated children have better rehabilitation outcomes, depending on the aspect of motivation.

  6. Search for sterile neutrino mixing in the muon neutrino to tau neutrino appearance channel with the OPERA detector

    NASA Astrophysics Data System (ADS)

    Di Crescenzo, A.; OPERA Collaboration

    2016-05-01

    The OPERA experiment observed ν_μ → ν_τ oscillations in the atmospheric sector. To this purpose, the hybrid OPERA detector was exposed to the CERN Neutrinos to Gran Sasso beam from 2008 to 2012, at a distance of 730 km from the neutrino source. Charged-current interactions of ν_τ were searched for through the identification of τ lepton decay topologies. The five observed ν_τ interactions are consistent with the expected number of events in the standard three-neutrino framework. Based on this result, new limits on the mixing parameters of a massive sterile neutrino may be set. Preliminary results of the analysis performed in the 3+1 neutrino framework are presented here.

  7. Bioenergy Knowledge Discovery Framework Fact Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Bioenergy Knowledge Discovery Framework (KDF) supports the development of a sustainable bioenergy industry by providing access to a variety of data sets, publications, and collaboration and mapping tools that support bioenergy research, analysis, and decision making. In the KDF, users can search for information, contribute data, and use the tools and map interface to synthesize, analyze, and visualize information in a spatially integrated manner.

  8. Integrated framework for developing search and discrimination metrics

    NASA Astrophysics Data System (ADS)

    Copeland, Anthony C.; Trivedi, Mohan M.

    1997-06-01

    This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.

  9. A Framework for Debugging Geoscience Projects in a High Performance Computing Environment

    NASA Astrophysics Data System (ADS)

    Baxter, C.; Matott, L.

    2012-12-01

    High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.

  10. How do informal information sources influence women's decision-making for birth? A meta-synthesis of qualitative studies.

    PubMed

    Sanders, Ruth A; Crozier, Kenda

    2018-01-10

    Women approach birth using various methods of preparation, drawing from conventional healthcare providers alongside informal information sources (IIS) outside the professional healthcare context. The forms in which these informal information sources are accessed and negotiated by women, and how these disconnected and often conflicting elements influence women's decision-making process for birth, have yet to be evaluated. The level of antenatal preparedness women feel can have significant and long-lasting implications for their birth experience and transition into motherhood and beyond. The aim of this study was to provide a deeper understanding of how informal information sources influence women's preparation for birth. Seven electronic databases were searched with predetermined search terms. No limitations were imposed for year of publication. English-language studies using qualitative methods exploring women's experiences of informal information sources and their impact upon women's birth preparation were included, subject to a quality appraisal framework. Searches were initiated in February 2016 and completed by March 2016. Studies were synthesised using an interpretive meta-ethnographic approach. Fourteen studies from Great Britain, Australia, Canada and the United States were included in the final synthesis. Four main themes were identified: Menu Birth; Information Heaven/Hell; Spheres of Support; and Trust. It is evident that women do not enter pregnancy as empty vessels devoid of a conceptual framework, but rather have a pre-constructed embodied knowledge base upon which other information is superimposed. Allied to this, it is clear that informal information was sought to mitigate the widespread experience of discordant information provided by maternity professionals. Women's access to the deluge of informal information sources in mainstream media during pregnancy has a significant impact on decision-making for birth. These informal sources redefine the power dynamic between women and maternal healthcare providers, simultaneously increasing levels of anxiety and challenging women's pre-existing ideations and aspirations of personal birth processes. A lack of awareness by some professionals of women's information-seeking behaviours generates barriers to women-centred support, leaving an experience-expectation mismatch unchecked. CRD42016041491 17/06/16.

  11. Analyses of infectious disease patterns and drivers largely lack insights from social epidemiology: contemporary patterns and future opportunities

    PubMed Central

    Noppert, Grace A; Kubale, John T; Wilson, Mark L

    2017-01-01

    Background Infectious disease epidemiologists have long recognised the importance of social variables as drivers of epidemics and disease risk, yet few apply analytic approaches from social epidemiology. We quantified and evaluated the extent to which recent infectious disease research employs the perspectives and methods of social epidemiology by replicating the methodology used by Cohen et al in a 2007 study. Methods Two search strategies were used to identify and review articles published from 1 January 2005 to 31 December 2013. First, we performed a keyword search of ‘social epidemiology’ in the title/abstract/text of published studies identified in PubMed, PsychInfo and ISI Web of Science, and classified each study as pertaining to infectious, non-infectious or other outcomes. A second PubMed search identified articles that were cross-referenced under non-infectious or infectious, and search terms relating to social variables. The abstracts of all articles were read, classified and examined to identify patterns over time. Results Findings suggest that infectious disease research publications that explicitly or implicitly incorporate social epidemiological approaches have stagnated in recent years. While the number of publications explicitly self-classified as ‘social epidemiology’ has risen, the proportion that investigated infectious disease outcomes has declined. Furthermore, infectious diseases accounted for the smallest proportion of articles that were cross-referenced with Medical Subject Headings (MeSH) terms related to social factors, and most of these involved sexually transmitted diseases. Conclusions The current landscape of infectious disease epidemiology could benefit from new approaches to understanding how the social and biophysical environment sustains transmission and exacerbates disparities. The framework of social epidemiology provides infectious disease researchers with such a perspective and research opportunity. PMID:27799618

  12. Never Use the Complete Search Space: a Concept to Enhance the Optimization Procedure for Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Bode, F.; Reuschen, S.; Nowak, W.

    2015-12-01

    Drinking-water well catchments include many potential sources of contamination, such as gas stations or agriculture. Finding optimal positions for early-warning monitoring wells is challenging because various parameters (and their uncertainties) influence the reliability and optimality of any suggested monitoring location or monitoring network. The overall goal of this project is to develop and establish a concept to assess, design and optimize early-warning systems within well catchments. Such optimal monitoring networks need to balance three competing objectives: a high detection probability, which can be reached by maximizing the "field of vision" of the monitoring network; a long early-warning time, such that there is enough time left to install countermeasures after first detection; and the overall operating costs of the monitoring network, which should ideally be reduced to a minimum. The method is based on numerical simulation of flow and transport in heterogeneous porous media, coupled with geostatistics and Monte Carlo simulation (or scenario analyses for real data), wrapped up within the framework of formal multi-objective optimization using a genetic algorithm. In order to speed up the optimization process and to better explore the Pareto front, we developed a concept that forces the algorithm to search only in regions of the search space where promising solutions can be expected. We are going to show how to define these regions beforehand, using knowledge of the optimization problem, but also how to define them independently of problem attributes. With that, our method can be used with and/or without detailed knowledge of the objective functions. In summary, our study helps to improve optimization results in less optimization time through meaningful restrictions of the search space. These restrictions can be made independently of the optimization problem, but also in a problem-specific manner.
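
    A minimal sketch of the restricted-sampling idea described above: candidate well positions are drawn only from pre-defined "promising" regions of the catchment rather than from the whole search space. The regions, the objective function and the crude evolutionary loop below are illustrative stand-ins, not the project's actual flow-and-transport model or optimizer.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # hypothetical promising sub-regions of a 10 x 10 km catchment: (xmin, xmax, ymin, ymax)
    regions = [(1.0, 3.0, 6.0, 9.0), (5.0, 8.0, 2.0, 4.0)]

    def sample_restricted(n):
        """Draw candidate well positions only from the promising regions."""
        pts = []
        for _ in range(n):
            xmin, xmax, ymin, ymax = regions[rng.integers(len(regions))]
            pts.append([rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)])
        return np.array(pts)

    def objective(p):
        # placeholder for the flow-and-transport detection model (lower = better)
        return -np.exp(-np.sum((p - np.array([2.0, 8.0])) ** 2))

    pop = sample_restricted(50)
    for _ in range(30):  # crude evolutionary loop
        pop = pop[np.argsort([objective(p) for p in pop])][:25]  # keep best half
        children = pop + rng.normal(0.0, 0.1, pop.shape)  # mutate survivors
        # (a full version would re-project children back into the regions)
        pop = np.vstack([pop, children])

    print(pop[0])  # best candidate well position found
    ```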

  13. Using the Knowledge to Action Framework in practice: a citation analysis and systematic review.

    PubMed

    Field, Becky; Booth, Andrew; Ilott, Irene; Gerrish, Kate

    2014-11-23

    Conceptual frameworks are recommended as a way of applying theory to enhance implementation efforts. The Knowledge to Action (KTA) Framework was developed in Canada by Graham and colleagues in the 2000s, following a review of 31 planned action theories. The framework has two components: Knowledge Creation and an Action Cycle, each of which comprises multiple phases. This review sought to answer two questions: 'Is the KTA Framework used in practice? And if so, how?' This study is a citation analysis and systematic review. The index citation for the original paper was identified in three databases with citation-searching facilities: Web of Science, Scopus and Google Scholar. Limits of English language and year of publication (2006-June 2013) were set. A taxonomy categorising the continuum of usage was developed. Only studies applying the framework to implementation projects were included. Data were extracted and mapped against each phase of the framework for studies where it was integral to the implementation project. The citation search yielded 1,787 records. A total of 1,057 titles and abstracts were screened. One hundred and forty-six studies described usage to varying degrees, ranging from referenced to integrated. In ten studies, the KTA Framework was integral to the design, delivery and evaluation of the implementation activities. All ten described using the Action Cycle and seven referred to Knowledge Creation. The KTA Framework was enacted in different health care and academic settings with projects targeted at patients, the public, and nursing and allied health professionals. The KTA Framework is being used in practice with varying degrees of completeness. It is frequently cited, with usage ranging from simple attribution via a reference, through informing planning, to making an intellectual contribution. When the framework was integral to knowledge translation, it guided action in idiosyncratic ways and there was theory fidelity. Prevailing wisdom encourages the use of theories, models and conceptual frameworks, yet their application is less evident in practice. This may be an artefact of reporting, indicating that prospective, primary research is needed to explore the real value of the KTA Framework and similar tools.

  14. Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review

    PubMed Central

    Hislop, Jenni; Adewuyi, Temitope E.; Vale, Luke D.; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G.; Briggs, Andrew H.; Fayers, Peter; Ramsay, Craig R.; Norrie, John D.; Harvey, Ian M.; Buckley, Brian; Cook, Jonathan A.

    2014-01-01

    Background Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. Methods and Findings A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. Conclusions A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts. Please see later in the article for the Editors' Summary PMID:24824338
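
    The review concerns how the target difference is chosen; the arithmetic that turns a chosen difference into a sample size is standard. A minimal sketch of the conventional two-arm calculation for a continuous outcome follows, with illustrative values that are not taken from the review.

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        """Normal-approximation sample size per arm for a two-sample comparison."""
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
        z_beta = norm.ppf(power)           # required power
        return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

    # e.g. to detect a hypothetical target difference of 5 units with sd = 10:
    print(n_per_arm(delta=5.0, sd=10.0))  # 63 participants per arm
    ```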

  15. Performance Assessment of Communicable Disease Surveillance in Disasters: A Systematic Review

    PubMed Central

    Babaie, Javad; Ardalan, Ali; Vatandoost, Hasan; Goya, Mohammad Mehdi; Akbarisari, Ali

    2015-01-01

    Background: This study aimed to identify the indices and frameworks that have been used to assess the performance of communicable disease surveillance (CDS) in response to disasters and other emergencies, including infectious disease outbreaks. Method: In this systematic review, PubMed, Google Scholar, Scopus, ScienceDirect, ProQuest databases and grey literature were searched until the end of 2013. All retrieved titles were examined in accordance with inclusion criteria. Abstracts of the relevant titles were reviewed and eligible abstracts were included in a list for data abstraction. Finally, the study variables were extracted. Results: Sixteen articles and one book were found relevant to our study objectives. In these articles, 31 criteria and 35 indicators were used or suggested for the assessment/evaluation of the performance of surveillance systems in disasters. The Centers for Disease Control (CDC) updated guidelines for the evaluation of public health surveillance systems were the most widely used. Conclusion: Despite the importance of performance assessment in improving CDS in response to disasters, there is a lack of clear and accepted frameworks. There is also no agreement on the use of existing criteria and indices. The only relevant framework is the CDC guideline, which is a common framework for assessing public health surveillance systems as a whole. There is an urgent need to develop appropriate frameworks, criteria, and indices for specifically assessing the performance of CDS in response to disasters and other emergencies, including infectious disease outbreaks. Key words: Disasters, Emergencies, Communicable Diseases, Surveillance System, Performance Assessment PMID:25774323

  16. Development of a Clinical Framework for Mirror Therapy in Patients with Phantom Limb Pain: An Evidence-based Practice Approach.

    PubMed

    Rothgangel, Andreas; Braun, Susy; de Witte, Luc; Beurskens, Anna; Smeets, Rob

    2016-04-01

    To describe the development and content of a clinical framework for mirror therapy (MT) in patients with phantom limb pain (PLP) following amputation. Based on an a priori formulated theoretical model, three sources of data collection were used to develop the clinical framework. First, a review of the literature took place on important clinical aspects and the evidence on the effectiveness of MT in patients with phantom limb pain. In addition, questionnaires and semi-structured interviews were used to analyze clinical experiences and preferences of physical and occupational therapists and patients suffering from PLP regarding the application of MT. All data were finally clustered into main and subcategories and were used to complement and refine the theoretical model. For every main category of the a priori formulated theoretical model, several subcategories emerged from the literature search and the patient and therapist interviews. Based on these categories, we developed a clinical flowchart that incorporates the main and subcategories in a logical way according to the phases in methodical intervention defined by the Royal Dutch Society for Physical Therapy. In addition, we developed a comprehensive booklet that illustrates the individual steps of the clinical flowchart. In this study, a structured clinical framework for the application of MT in patients with PLP was developed. This framework is currently being tested for its effectiveness in a multicenter randomized controlled trial. © 2015 World Institute of Pain.

  17. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

    Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also outperforms a genetic-algorithm optimization method in optimizing the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
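
    A minimal sketch of the enhancement the abstract describes: a basic PSO loop in which a random subset of particles receives an isotropic Gaussian perturbation at each iteration. This is not the authors' C++ implementation; the coefficients, mutation rate and benchmark function are illustrative.

    ```python
    import numpy as np

    def rastrigin(x):  # standard multimodal benchmark
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(0)
    dim, n, iters = 5, 30, 500
    w, c1, c2, p_mut, sigma = 0.72, 1.49, 1.49, 0.1, 0.3  # illustrative settings

    x = rng.uniform(-5.12, 5.12, (n, dim))  # particle positions
    v = np.zeros((n, dim))                  # particle velocities
    pbest = x.copy()
    pbest_f = np.array([rastrigin(p) for p in x])
    g = pbest[np.argmin(pbest_f)]           # global best

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # isotropic Gaussian mutation: perturb a random subset of particles
        mask = rng.random(n) < p_mut
        x[mask] += rng.normal(0.0, sigma, (mask.sum(), dim))
        f = np.array([rastrigin(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]

    print(rastrigin(g))  # best objective value found
    ```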

  18. Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving

    PubMed Central

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-01

    This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of an RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve robotic motion planning problems. However, RRT is often unreliable in a number of practical applications, such as autonomous vehicles used for on-road driving, because of unnatural trajectories, useless sampling, and slow exploration. To address these problems, we present an improved RRT algorithm that introduces an effective guided sampling strategy based on drivers’ on-road visual search behavior and a continuous-curvature smoothing method based on B-splines. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. Extensive experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, comparative tests and statistical analyses show that its performance is superior to that of previous algorithms. PMID:26784203

  19. Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving.

    PubMed

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-15

    This paper describes a real-time motion planner based on the drivers' visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers' visual search behavior in the framework of an RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve robotic motion planning problems. However, RRT is often unreliable in a number of practical applications, such as autonomous vehicles used for on-road driving, because of unnatural trajectories, useless sampling, and slow exploration. To address these problems, we present an improved RRT algorithm that introduces an effective guided sampling strategy based on drivers' on-road visual search behavior and a continuous-curvature smoothing method based on B-splines. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. Extensive experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, comparative tests and statistical analyses show that its performance is superior to that of previous algorithms.
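
    A minimal 2D sketch of the guided-sampling idea behind the planner: an RRT whose random samples are biased toward a guidance region (here, simply the goal), standing in for the drivers' visual-search guidance. The workspace, bias rate and absence of obstacle checking are illustrative simplifications, not the authors' planner.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
    step, bias = 0.5, 0.3          # extension step and guided-sampling rate
    nodes, parent = [start], {0: None}  # tree; backtracking parent[] yields the path

    def sample():
        if rng.random() < bias:    # guided sample: Gaussian around the guidance region
            return goal + rng.normal(0.0, 1.0, 2)
        return rng.uniform(0.0, 10.0, 2)  # uniform sample over the workspace

    for _ in range(2000):
        q = sample()
        i = int(np.argmin([np.linalg.norm(q - p) for p in nodes]))  # nearest node
        d = q - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)      # extend one step
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if np.linalg.norm(new - goal) < step:                       # goal region reached
            break

    print(len(nodes), "nodes expanded")
    ```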

  20. Use of Action Research in Nursing Education

    PubMed Central

    Pehler, Shelley-Rae; Stombaugh, Angela

    2016-01-01

    Purpose. The purpose of this article is to describe action research in nursing education and to propose a definition of action research for providing guidelines for research proposals and criteria for assessing potential publications for nursing higher education. Methods. The first part of this project involved a search of the literature on action research in nursing higher education from 1994 to 2013. Searches were conducted in the CINAHL and MEDLINE databases. Applying the criteria identified, 80 publications were reviewed. The second part of the project involved a literature review of action research methodology from several disciplines to assist in assessing articles in this review. Results. This article summarizes the nursing higher education literature reviewed and provides processes and content related to four topic areas in nursing higher education. The descriptions assist researchers in learning more about the complexity of both the action research process and the varied outcomes. The literature review of action research in many disciplines along with the review of action research in higher education provided a framework for developing a nursing-education-centric definition of action research. Conclusions. Although guidelines for developing action research and criteria for publication are suggested, continued development of methods for synthesizing action research is recommended. PMID:28078138

  1. Consistent searches for SMEFT effects in non-resonant dijet events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alte, Stefan; Konig, Matthias; Shepherd, William

    Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order $$1/\Lambda^2$$ and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.

  2. Consistent searches for SMEFT effects in non-resonant dijet events

    DOE PAGES

    Alte, Stefan; Konig, Matthias; Shepherd, William

    2018-01-19

    Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order $$1/\Lambda^2$$ and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.

  3. Maternal parental self-efficacy in the postpartum period.

    PubMed

    Leahy-Warren, Patricia; McCarthy, Geraldine

    2011-12-01

    To present an integrated literature review on maternal parental self-efficacy (MPSE) in the postpartum period. A literature search of CINAHL with full text, MEDLINE and PsycINFO from their start dates to February 2010. Inclusion criteria were English-language research articles which reported the measurement of MPSE in the postpartum period. Articles were reviewed based on purpose, theoretical framework, data collection method, sample, main findings and nursing implications for maternal parenting. In addition, data related to the instruments that were used to measure MPSE were included. The data revealed a statistically significant increase in MPSE over time from baseline; a positive relationship between MPSE and number of children, social support, maternal parenting satisfaction and marital satisfaction; and a negative relationship between MPSE and maternal stress, anxiety and postpartum depression. A variety of instruments to measure MPSE were used, but the majority were based on Bandura's framework. Findings from this review may assist women's health researchers and clinical nurses/midwives in assessing and developing appropriate interventions for increasing risk awareness, enhancing MPSE and subsequent satisfaction with parenting and emotional well-being. Further research underpinned by theoretical frameworks and using domain-specific instruments is necessary to identify predictors of MPSE. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. From built environment to health inequalities: An explanatory framework based on evidence

    PubMed Central

    Gelormino, Elena; Melis, Giulia; Marietta, Cristina; Costa, Giuseppe

    2015-01-01

    Objective: The Health in All Policies strategy aims to engage every policy domain in health promotion. Socially disadvantaged groups are usually more affected by the potential negative impacts of policies that are not health oriented. The built environment represents an important policy domain and, apart from its housing component, its impact on health inequalities is seldom assessed. Methods: A scoping review of evidence on the built environment and its health equity impact was carried out, searching both the urban and the medical literature since 2000 and analysing socio-economic inequalities in relation to different components of the built environment. Results: The proposed explanatory framework assumes that key features of the built environment (identified as density, functional mix, and public spaces and services) may influence individual health through their impact on both the natural environment and the social context, as well as behaviours, and that these effects may be unequally distributed according to the social position of individuals. Conclusion: In general, the expected links proposed by the framework are well documented in the literature; however, evidence of their impact on health inequalities remains uncertain owing to confounding factors, heterogeneity in study design, and the difficulty of generalizing evidence that remains strongly embedded in local contexts. PMID:26844145

  5. Automated detection of hospital outbreaks: A systematic review of methods.

    PubMed

    Leclère, Brice; Buckeridge, David L; Boëlle, Pierre-Yves; Astagneau, Pascal; Lepelletier, Didier

    2017-01-01

    Several automated algorithms for epidemiological surveillance in hospitals have been proposed. However, the usefulness of these methods to detect nosocomial outbreaks remains unclear. The goal of this review was to describe outbreak detection algorithms that have been tested within hospitals, consider how they were evaluated, and synthesize their results. We developed a search query using keywords associated with hospital outbreak detection and searched the MEDLINE database. To ensure the highest sensitivity, no limitations were initially imposed on publication languages and dates, although we subsequently excluded studies published before 2000. Every study that described a method to detect outbreaks within hospitals was included, without any exclusion based on study design. Additional studies were identified through citations in retrieved studies. Twenty-nine studies were included. The detection algorithms were grouped into 5 categories: simple thresholds (n = 6), statistical process control (n = 12), scan statistics (n = 6), traditional statistical models (n = 6), and data mining methods (n = 4). The evaluation of the algorithms was often solely descriptive (n = 15), but more complex epidemiological criteria were also investigated (n = 10). The performance measures varied widely between studies: e.g., the sensitivity of an algorithm in a real-world setting could vary between 17% and 100%. Although outbreak detection algorithms are useful complementary tools for traditional surveillance, the heterogeneity in results among published studies does not support quantitative synthesis of their performance. A standardized framework should be followed when evaluating outbreak detection methods to allow comparison of algorithms across studies and synthesis of results.
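
    A minimal sketch of one of the simplest algorithm families reviewed (statistical process control): a one-sided CUSUM detector over weekly case counts. The synthetic counts, reference value and decision limit below are illustrative only, not drawn from any of the reviewed studies.

    ```python
    import numpy as np

    counts = np.array([2, 3, 1, 2, 4, 2, 3, 2, 9, 11, 8, 3, 2])  # weekly isolates
    baseline = counts[:8].mean()  # in-control reference from a baseline period
    k, h = 1.0, 4.0               # allowance and decision limit (illustrative)

    s = 0.0
    for week, c in enumerate(counts):
        s = max(0.0, s + (c - baseline - k))  # one-sided CUSUM statistic
        if s > h:
            print(f"alarm at week {week}: CUSUM={s:.1f}")
            break
    ```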

  6. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  7. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
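
    featsel's core abstraction, search over the Boolean lattice of feature subsets, can be illustrated in a few lines of exhaustive search. The cost function below is a hypothetical stand-in for the framework's pluggable cost functions, not part of featsel itself.

    ```python
    from itertools import combinations

    features = ["f0", "f1", "f2", "f3"]

    def cost(subset):
        # illustrative cost: prefer small subsets that contain f1
        return len(subset) - (2 if "f1" in subset else 0)

    # exhaustive walk over all 2^n nodes of the Boolean lattice of subsets
    best = min(
        (frozenset(c) for r in range(len(features) + 1)
         for c in combinations(features, r)),
        key=cost,
    )
    print(sorted(best))
    ```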

  8. Medical devices early assessment methods: systematic literature review.

    PubMed

    Markiewicz, Katarzyna; van Til, Janine A; IJzerman, Maarten J

    2014-04-01

    The aim of this study was to get an overview of current theory and practice in early assessments of medical devices, and to identify the aims and uses of early assessment methods used in practice. A systematic literature review was conducted in September 2013, using computerized databases (PubMed, Science Direct, and Scopus) and reference-list searching. Selected articles were categorized based on their type, objective, and main target audience. The methods used in the application studies were extracted and mapped throughout the early stages of development and for their particular aims. Of 1,961 articles identified, eighty-three studies passed the inclusion criteria, and a further thirty were included through reference-list searching. Thirty-one theoretical papers and eighty-two application papers were included. Most studies investigated potential applications/possible improvements of medical devices, developed an early assessment framework or included stakeholder perspectives in early development stages. Among the multiple qualitative and quantitative methods identified, only a few were used more than once. The methods aim to inform strategic considerations (e.g., literature review), economic evaluation (e.g., cost-effectiveness analysis), and clinical effectiveness (e.g., clinical trials). Medical devices were often in the prototype product development stage, and the results were usually aimed at informing manufacturers. This study showed converging aims yet widely diverging methods for early assessment during medical device development. For early assessment to become an integral part of activities in the development of medical devices, methods need to be clarified and standardized, and the aims and value of assessment itself must be demonstrated to the main stakeholders for assuring effective and efficient medical device development.

  9. From Spotlight to Fluorescent Bulb: Aesthetic Dimensions of Personal, Practical Knowledge in an Actor Training to Be a High School Teacher

    ERIC Educational Resources Information Center

    Dobson, Darrell

    2005-01-01

    This paper can be conceived as one story embedded in a second story, in which the "outer" narrative, involving the theoretical and methodological framework, is that of my search for a means of defining, articulating and implementing an aesthetic epistemology in both academic research and in teacher education/development (a search that is…

  10. Contextual Guidance of Eye Movements and Attention in Real-World Scenes: The Role of Global Features in Object Search

    ERIC Educational Resources Information Center

    Torralba, Antonio; Oliva, Aude; Castelhano, Monica S.; Henderson, John M.

    2006-01-01

    Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach of attentional guidance by global…

  11. Bruxism in prospective studies of veneered zirconia restorations-a systematic review.

    PubMed

    Schmitter, Marc; Boemicke, Wolfgang; Stober, Thomas

    2014-01-01

    The objectives of this work were to systematically review the effect of bruxism on the survival of zirconia restorations on teeth and to assess the prevalence of nocturnal masseter muscle activity in a clinical sample. A Medline search was performed independently and in triplicate using the term "zirconia" and activating the filter "clinical trial." Furthermore, three other electronic databases were searched using the same term. Only papers published in English on prospective studies of veneered zirconia frameworks on teeth were included. To estimate the prevalence of sleep bruxism in clinical settings, subjects with no clinical signs of bruxism who did not report grinding and/or clenching were examined by use of a disposable electromyographic device. The initial search resulted in 107 papers, of which 22 were included in the analysis. Bruxers were excluded in 20 of these articles. In one study bruxers were not excluded, and one study did not provide information regarding this issue. The methods used to identify bruxers were heterogeneous or not described, and no study used reliable, valid methods. Of 33 subjects without clinical signs of bruxism, nocturnal masseter muscle activity exceeded the predefined activity threshold for 63.8% of the subjects. There is a lack of information about the effect of bruxism on the incidence of technical failure of veneered zirconia restorations because all available studies failed to use suitable instruments for the diagnosis of bruxism. Nocturnal muscle activity without clinical symptoms or reported bruxism was observed for a relevant number of patients.

  12. Standard Biological Parts Knowledgebase

    PubMed Central

    Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M.; Gennari, John H.

    2011-01-01

    We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate “promoter” parts that are known to be both negatively and positively regulated. This approach provides new web-based data access, enabling searches for parts that were not previously possible. PMID:21390321

  13. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is a genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application programs. Written in Think C.

  14. Prospects of detection of the first sources with SKA using matched filters

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.; Mellema, Garrelt; Choudhuri, Samir; Majumdar, Suman; Giri, Sambit K.

    2018-05-01

    The matched filtering technique is an efficient method to detect H ii bubbles and absorption regions in radio interferometric observations of the redshifted 21-cm signal from the epoch of reionization and the Cosmic Dawn. Here, we present an implementation of this technique for upcoming observations such as SKA1-low, for a blind search of absorption regions at the Cosmic Dawn. The pipeline explores a four-dimensional parameter space on simulated mock visibilities using an MCMC algorithm. The framework is able to efficiently determine the positions and sizes of the absorption/H ii regions in the field of view.
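
    A minimal 1D sketch of the matched-filtering idea: correlate noisy data against a template of the expected absorption profile and locate the peak response. The Gaussian dip, noise level and pixel grid are illustrative, not the simulated SKA1-low visibilities used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 512
    x = np.arange(n)
    signal = -np.exp(-0.5 * ((x - 200) / 10.0) ** 2)   # absorption dip at pixel 200
    data = signal + rng.normal(0.0, 0.3, n)            # add instrumental noise

    # matched template: the same dip shape, centered in a short window
    template = -np.exp(-0.5 * ((x[:61] - 30) / 10.0) ** 2)

    response = np.correlate(data - data.mean(),
                            template - template.mean(), mode="same")
    response /= response.std()  # crude normalization for thresholding

    print("peak response at pixel", int(np.argmax(response)))  # ~200 if detected
    ```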

  15. Representations of African Americans in the Grief and Mourning Literature from 1998 to 2014: A Systematic Review.

    PubMed

    Granek, Leeat; Peleg-Sagy, Tal

    2015-01-01

    The authors examined representations of African Americans in the grief literature to assess (a) frequencies; (b) content; and (c) use of a universalist or a contextualized framework. They conducted searches in three databases that target the grief literature published in the last 15 years. Fifty-nine articles met the criteria. A small number of studies have been published on African Americans, and these tend to focus on homicide. Many studies reported incomplete methods. Comparison studies were common, and pathological grief outcomes that were validated on White populations were used as outcome variables with African American participants.

  16. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages, while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java Spring framework that converts user input to and from XML. Many aspects of the tool require use of a relational database, including encrypted user login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms, and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
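
    Since Mercury indexes metadata with Apache Solr, a harvested catalogue can be queried through Solr's standard select endpoint. A minimal sketch follows; the host, core name and field names are hypothetical, not those of the actual NGEE Arctic/ARM deployment.

    ```python
    import requests

    SOLR_URL = "http://localhost:8983/solr/metadata/select"  # hypothetical core

    params = {
        "q": "title:permafrost AND keywords:soil",           # free-text metadata search
        "fq": "start_date:[2012-01-01T00:00:00Z TO *]",      # filter query on a date field
        "rows": 10,
        "wt": "json",
    }
    resp = requests.get(SOLR_URL, params=params, timeout=10)
    for doc in resp.json()["response"]["docs"]:
        print(doc.get("title"), doc.get("source"))
    ```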

  17. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using typical techniques adopted in literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemics information has been acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into an unique array, and clustered using the K-means algorithm. The clusters are reorganized using a second larger temporal window into a Bag Of Words framework, so as to build the feature vector that will feed the SVM classifier.
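
    A minimal sketch of the pipeline the abstract describes: DFT magnitudes of a proxemics signal over a sliding temporal window, K-means to form a codebook, bag-of-words histograms over a larger window, and an SVM classifier. The synthetic signal, window sizes and placeholder labels are illustrative, not the authors' data or settings.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    dist = rng.random(600)  # synthetic inter-subject distance signal from a tracker
    win = 16

    # DFT magnitude features over a sliding temporal window
    feats = np.array([np.abs(np.fft.rfft(dist[i:i + win]))
                      for i in range(0, len(dist) - win, win // 2)])

    codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(feats)
    words = codebook.predict(feats)  # cluster index = "visual word" per window

    # bag-of-words histograms over a second, larger temporal window
    bow = np.array([np.bincount(words[i:i + 10], minlength=8)
                    for i in range(0, len(words) - 10, 5)])
    labels = rng.integers(0, 2, len(bow))  # placeholder interaction labels

    clf = SVC(kernel="rbf").fit(bow, labels)
    print(clf.score(bow, labels))
    ```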

  18. The impact of nurse prescribing on the clinical setting.

    PubMed

    Creedon, Rena; Byrne, Stephen; Kennedy, Julia; McCarthy, Suzanne

    To investigate the impact nurse prescribing has on the organisation, patient and health professional, and to identify factors associated with the growth of nurse prescribing. Systematic search and narrative review. Data were obtained through the CINAHL, PubMed, ScienceDirect and Online Computer Library Centre (OCLC) databases/websites, and by hand searching. English peer-reviewed quantitative, qualitative and mixed-method articles published from September 2009 through to August 2014 exploring nurse prescribing from the perspective of the organisation, health professional and patient were included. Following a systematic selection process, the studies identified were also assessed for quality by applying Cardwell's framework. From the initial 443 citations, 37 studies were included in the review. Most studies were descriptive in nature. Commonalities addressed were stakeholders' views, prescribing in practice, jurisdiction, education and benefits/barriers. Prescriptive authority for nurses continues to be a positive addition to clinical practice. However, concerns have emerged regarding appropriate support, relationships and jurisdictional issues. A more comprehensive understanding of nurse and midwife prescribing workloads is required to capture the true impact and cost-effectiveness of the initiative.

  19. Part-based deep representation for product tagging and search

    NASA Astrophysics Data System (ADS)

    Chen, Keqing

    2017-06-01

    Despite previous studies, tagging and indexing product images remain challenging due to the large inner-class variation of the products. In traditional methods, quantized hand-crafted features such as SIFT are extracted as the representation of the product images, which are not discriminative enough to handle the inner-class variation. For discriminative image representation, this paper first presents a novel deep convolutional neural network (DCNN) architecture pre-trained on a large-scale general image dataset. Compared to the traditional features, our DCNN representation offers more discriminative power with fewer dimensions. Moreover, we incorporate a part-based model into the framework to overcome the negative effects of bad alignment and cluttered background, and hence the descriptive ability of the deep representation is further enhanced. Finally, we collect and contribute a well-labeled shoe image database, i.e., the TBShoes, on which we apply the part-based deep representation for product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.
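
    A minimal sketch of the first step, using an off-the-shelf pre-trained CNN as a compact, discriminative image descriptor. This substitutes a standard ResNet backbone for the authors' architecture, omits the part-based alignment step, and the image file name is hypothetical.

    ```python
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop classifier
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("shoe.jpg").convert("RGB")  # hypothetical product image
    with torch.no_grad():
        feat = backbone(preprocess(img).unsqueeze(0)).flatten(1)  # 1 x 512 descriptor
    print(feat.shape)  # descriptors can be indexed for tagging and search
    ```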

  20. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.

    PubMed

    Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas

    2016-04-01

    In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor-search, multi-scale patches, invariance to cube isometries, stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve highly competitive running time despite minimal processing power and resources. This method was evaluated on Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved, with a limited learning stage thus restricting the risk of overfit. Moreover, segmentation smoothness does not involve any post-processing.

  1. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples and the non-linearity of the problem. It is difficult to obtain satisfying results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of this algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm searches for the optimal parameter pair, as a better alternative to the common practice of selecting the apparently best parameters by hand. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative hereditary breast cancer and acute leukaemia datasets. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
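
    A minimal sketch of the underlying parameter-selection problem: cross-validated search over (C, gamma) for a Gaussian-kernel SVM. A plain grid search stands in here for the paper's genetic-algorithm search, and the synthetic data stands in for the microarray sets.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # few samples, many variables: a caricature of microarray data
    X, y = make_classification(n_samples=80, n_features=500, n_informative=20,
                               random_state=0)

    grid = GridSearchCV(
        SVC(kernel="rbf"),
        {"C": np.logspace(-1, 3, 5), "gamma": np.logspace(-5, -1, 5)},
        cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)
    ```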

  2. Finding viable models in SUSY parameter spaces with signal specific discovery potential

    NASA Astrophysics Data System (ADS)

    Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi

    2013-08-01

    Recent results from ATLAS, giving a Higgs mass of 125.5 GeV, further constrain already highly constrained supersymmetric models such as the pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested large effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on relic dark matter density on constrained models, aiming at predicting superpartner masses, and establishing the likelihood of SUSY models compared to that of the Standard Model vis-à-vis experimental data. In this paper a framework for efficient search for discoverable, non-excluded regions of different SUSY spaces giving a specific experimental signature of interest is presented. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide the search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account. This includes recent bounds on relic dark matter density, the Higgs sector and rare B-meson decays. A clustering algorithm is applied to classify selected models according to expected phenomenology, enabling automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalists with a viable tool that helps to target experimental signatures to search for, once a class of models of interest is established. As an example, a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search, 105,209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.

  3. Multidisciplinary Environments: A History of Engineering Framework Development

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Gillian, Ronnie E.

    2006-01-01

    This paper traces the history of engineering frameworks and their use by Multidisciplinary Design Optimization (MDO) practitioners. The approach is to reference papers that have been presented at one of the ten previous Multidisciplinary Analysis and Optimization (MA&O) conferences. By limiting the search to MA&O papers, the authors can (1) identify the key ideas that led to general purpose MDO frameworks and (2) uncover roadblocks that delayed the development of these ideas. The authors make no attempt to assign credit for revolutionary ideas or to assign blame for missed opportunities. Rather, the goal is to trace the various threads of computer architecture and software framework research and to observe how these threads contributed to the commercial framework products available today.

  4. Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search

    PubMed Central

    2014-01-01

    Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology present a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of the planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell-based models into graphs for comparison with data stored in an external data repository, has made feasible the automated development, training, and validation of computational models using morphology-based data. This work is part of an ongoing project to automate the search process, which will greatly expand our ability to identify, consider, and test biological mechanisms in the field of regenerative biology. PMID:24917489
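
    A minimal sketch of the comparison step: represent target and simulated morphologies as labelled graphs and score their similarity by graph edit distance, which can then drive a fitness function. The region labels are illustrative, and networkx's generic implementation stands in for the paper's calculation.

    ```python
    import networkx as nx

    def worm(regions, links):
        """Build a graph of anatomical regions and their adjacencies."""
        g = nx.Graph()
        g.add_nodes_from(regions)
        g.add_edges_from(links)
        return g

    target = worm(["head", "trunk", "tail"],
                  [("head", "trunk"), ("trunk", "tail")])
    simulated = worm(["head", "trunk"],
                     [("head", "trunk")])  # regeneration still incomplete

    # smaller distance = closer match; usable directly inside a fitness function
    print(nx.graph_edit_distance(target, simulated))
    ```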

  5. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    NASA Astrophysics Data System (ADS)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy and finite element analysis. A lumped-parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, thereby providing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.

  6. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  7. Effective 2D-3D medical image registration using Support Vector Machine.

    PubMed

    Qi, Wenyuan; Gu, Lixu; Zhao, Qiang

    2008-01-01

    Registration of a pre-operative 3D volume dataset with intra-operative 2D images is gradually becoming an important technique to assist radiologists in diagnosing complicated diseases easily and quickly. In this paper, we propose a novel 2D/3D registration framework based on Support Vector Machines (SVM) to avoid the disadvantage of generating a large number of DRR images intra-operatively. An estimated similarity-metric distribution can be built from the relationship between the transform parameters and prior sparse target metric values by means of the SVR method. Based on this, the globally optimal transform parameters are searched out by an optimizer in order to align the 3D volume dataset with the intra-operative 2D image. Experiments reveal that our proposed registration method improves performance compared with the conventional registration method and provides precise registration results efficiently.
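
    A minimal sketch of the surrogate idea: fit an SVR on sparse (transform parameters, similarity) samples, then let an optimizer search the learned surface instead of generating new DRRs. The synthetic similarity function below stands in for a real DRR comparison, and all settings are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    true_pose = np.array([2.0, -1.0, 0.5])  # unknown registration offset

    def similarity(theta):
        # stand-in for comparing a DRR at pose theta with the 2D image
        return -np.sum((theta - true_pose) ** 2)

    thetas = rng.uniform(-5, 5, (200, 3))   # sparse prior samples of the transform
    values = np.array([similarity(t) for t in thetas])

    surrogate = SVR(kernel="rbf", C=100.0).fit(thetas, values)

    # search the surrogate for the pose that maximizes predicted similarity
    res = minimize(lambda t: -surrogate.predict(t.reshape(1, -1))[0],
                   x0=np.zeros(3), method="Nelder-Mead")
    print(res.x)  # should land near the true offset
    ```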

  8. Analysis of United States Marine Corps Operations in Support of Humanitarian Assistance and Disaster Relief

    DTIC Science & Technology

    2013-12-01

    We analyze the capabilities of the USMC MEU that satisfy demands arising from natural disasters, and follow the humanitarian and military core competencies framework for studying the USMC.

  9. Presentations - Herriott, T.M. and others, 2015 | Alaska Division of

    Science.gov Websites

    Title: Sequence stratigraphic framework of the Upper Jurassic Naknek Formation, Cook Inlet forearc. Herriott, T.M., Wartes, M.A., and Decker, P.L., 2015. Presentation hosted by the Alaska Division of Geological & Geophysical Surveys (DGGS).

  10. Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design, where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding global optima of multi-modal functions and searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources, since multiple function evaluations (flow simulations in aerodynamic design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on a relatively new member of the general class of evolutionary methods called differential evolution (DE). This method is easy to use and program, and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will of course yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single- and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization that is based on neural networks will also be included as part of this lecture. The method offers advantages over traditional optimization methods. It is more flexible than other methods in dealing with design in the context of both steady and unsteady flows, partial and complete data sets, combined experimental and numerical data, inclusion of various constraints and rules of thumb, and other issues that characterize the aerodynamic design process. Neural networks provide a natural framework within which a succession of numerical solutions of increasing fidelity, incorporating more realistic flow physics, can be represented and utilized for optimization. Neural networks also offer an excellent framework for multiple-objective and multidisciplinary design optimization. Simulation tools from various disciplines can be integrated within this framework, and rapid trade-off studies involving one or many disciplines can be performed. The prospect of combining neural-network-based optimization methods and evolutionary algorithms to obtain a hybrid method with the best properties of both will be included in this presentation. Achieving solution diversity and accurate convergence to the exact Pareto front in multiple-objective optimization usually requires a significant computational effort with evolutionary algorithms. In this lecture we will also explore the possibility of using neural networks to obtain estimates of the Pareto-optimal front using non-dominated solutions generated by DE as training data. Neural network estimators have the potential advantage of reducing the number of function evaluations required to obtain solution accuracy and diversity, thus reducing design cost.
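
    A minimal sketch of differential evolution on a multimodal test function, using SciPy's off-the-shelf implementation; the function, bounds and settings are illustrative, and the neural-network surrogate part of the lecture is not shown.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def rastrigin(x):  # multimodal function with many local optima
        x = np.asarray(x)
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    result = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 4,
                                    seed=0, tol=1e-8)
    print(result.x, result.fun)  # global optimum is at the origin
    ```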

  11. Systematic framework to evaluate the status of physical activity research for persons with multiple sclerosis.

    PubMed

    Dixon-Ibarra, Alicia; Vanderbom, Kerri; Dugala, Anisia; Driver, Simon

    2014-04-01

    Exploring the current state of health behavior research for individuals with multiple sclerosis is essential to understanding the next steps required to reduce preventable disability. A way to link research to translational health promotion programs is by utilizing the Behavioral Epidemiological Framework, which describes a sequence of phases used to categorize health-related behavioral research. This critical audit of the literature examines the current state of physical activity research for persons with multiple sclerosis by utilizing the proposed Behavioral Epidemiological Framework. After searching MEDLINE, PUBMED, PsycINFO, Google Scholar and several major areas within EBSCOHOST (2000 to present), retrieved articles were categorized according to the framework phases and coding rules. Of 139 articles, 49% were in phase 1 (establishing links between behavior and health), 18% phase 2 (developing methods for measuring behavior), 24% phase 3 (identifying factors influencing behavior and implications for theory), and 9% phase 4 and 5 (evaluating interventions to change behavior and translating research into practice). Emphasis on phase 1 research indicates the field is in its early stages of development. Providing people with multiple sclerosis with the necessary tools through health promotion programs is needed to reduce secondary conditions and co-morbidities. Reassessment of the field of physical activity and multiple sclerosis in the future could provide insight into whether the field is evolving over time or remaining stagnant. Published by Elsevier Inc.

  12. The game of active search for extra-terrestrial intelligence: breaking the `Great Silence'

    NASA Astrophysics Data System (ADS)

    de Vladar, Harold P.

    2013-01-01

    The search for extra-terrestrial intelligence (SETI) has been performed principally as a one-way survey, listening to radio frequencies across the Milky Way and other galaxies. However, scientists have only rarely engaged in active messaging. This suggests a simple rationale: if other civilizations exist and take a similar approach to ours, namely listening but not broadcasting, the result is a silent universe. A simple game-theoretical model, the prisoner's dilemma, explains this situation: each player (civilization) can passively search (defect), or actively search and broadcast (cooperate). In order to maximize the payoff (or, equivalently, minimize the risks) the best strategy is not to broadcast. In fact, the active search has been opposed on the grounds that it might be dangerous to expose ourselves. However, most of these ideas have not been based on objective arguments, and ignore any accounting of the possible gains and losses. Thus, the question stands: should we perform an active search? I develop a game-theoretical framework where civilizations can be of different types, and explicitly apply it to a situation where societies are either interested in establishing two-way communication or belligerent and eager to exploit ours. The framework gives a quantitative solution (a mixed strategy), namely how frequently we should perform the active SETI. This frequency is roughly proportional to the inverse of the risk, and can be extremely small. However, given the immense number of stars being scanned, it supports active SETI. The model is compared with simulations, and the possible actions are evaluated through the San Marino scale, which measures the risks of messaging.
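
    As a toy illustration of the kind of calculation involved (not the paper's actual model, which includes civilization types and the San Marino scale), the following sketch solves the indifference condition of a symmetric 2x2 broadcast-or-listen game for its mixed strategy. All payoff values are invented for illustration.

        def mixed_strategy(A):
            # For a symmetric 2x2 game with payoff matrix A[i][j] (payoff of
            # playing i against j), the interior mixed strategy p for playing
            # strategy 0 makes the opponent indifferent between 0 and 1:
            #   p*A[0][0] + (1-p)*A[0][1] = p*A[1][0] + (1-p)*A[1][1]
            (a, b), (c, d) = A
            return (d - b) / (a - b - c + d)

        # Hypothetical payoffs: strategy 0 = broadcast, strategy 1 = listen only.
        # Mutual broadcasting yields contact (+10); broadcasting alone risks
        # exposure (-5); listening while the other broadcasts still gains a
        # little (+2); mutual silence yields nothing.
        A = [[10.0, -5.0],
             [ 2.0,  0.0]]
        print(mixed_strategy(A))   # fraction of opportunities on which to broadcast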

  13. Evidence-based practice: extending the search to find material for the systematic review

    PubMed Central

    Helmer, Diane; Savoie, Isabelle; Green, Carolyn; Kazanjian, Arminée

    2001-01-01

    Background: Cochrane-style systematic reviews increasingly require the participation of librarians. Guidelines on the appropriate search strategy to use for systematic reviews have been proposed. However, research evidence supporting these recommendations is limited. Objective: This study investigates the effectiveness of various systematic search methods used to uncover randomized controlled trials (RCTs) for systematic reviews. Effectiveness is defined as the proportion of relevant material uncovered for the systematic review using extended systematic review search methods. The following extended systematic search methods are evaluated: searching subject-specific or specialized databases (including trial registries), hand searching, scanning reference lists, and communicating personally. Methods: Two systematic review projects were prospectively monitored regarding the method used to identify items as well as the type of items retrieved. The proportion of RCTs identified by each systematic search method was calculated. Results: The extended systematic search methods uncovered 29.2% of all items retrieved for the systematic reviews. The search of specialized databases was the most effective method, followed by scanning of reference lists, communicating personally, and hand searching. Although the number of items identified through hand searching was small, these unique items would otherwise have been missed. Conclusions: Extended systematic search methods are effective tools for uncovering material for the systematic review. The quality of the items uncovered has yet to be assessed and will be key in evaluating the value of the systematic search methods. PMID:11837256

  14. Reticular synthesis of porous molecular 1D nanotubes and 3D networks.

    PubMed

    Slater, A G; Little, M A; Pulido, A; Chong, S Y; Holden, D; Chen, L; Morgan, C; Wu, X; Cheng, G; Clowes, R; Briggs, M E; Hasell, T; Jelfs, K E; Day, G M; Cooper, A I

    2017-01-01

    Synthetic control over pore size and pore connectivity is the crowning achievement for porous metal-organic frameworks (MOFs). The same level of control has not been achieved for molecular crystals, which are not defined by strong, directional intermolecular coordination bonds. Hence, molecular crystallization is inherently less controllable than framework crystallization, and there are fewer examples of 'reticular synthesis', in which multiple building blocks can be assembled according to a common assembly motif. Here we apply a chiral recognition strategy to a new family of tubular covalent cages to create both 1D porous nanotubes and 3D diamondoid pillared porous networks. The diamondoid networks are analogous to MOFs prepared from tetrahedral metal nodes and linear ditopic organic linkers. The crystal structures can be rationalized by computational lattice-energy searches, which provide an in silico screening method to evaluate candidate molecular building blocks. These results are a blueprint for applying the 'node and strut' principles of reticular synthesis to molecular crystals.

  15. Reticular synthesis of porous molecular 1D nanotubes and 3D networks

    NASA Astrophysics Data System (ADS)

    Slater, A. G.; Little, M. A.; Pulido, A.; Chong, S. Y.; Holden, D.; Chen, L.; Morgan, C.; Wu, X.; Cheng, G.; Clowes, R.; Briggs, M. E.; Hasell, T.; Jelfs, K. E.; Day, G. M.; Cooper, A. I.

    2017-01-01

    Synthetic control over pore size and pore connectivity is the crowning achievement for porous metal-organic frameworks (MOFs). The same level of control has not been achieved for molecular crystals, which are not defined by strong, directional intermolecular coordination bonds. Hence, molecular crystallization is inherently less controllable than framework crystallization, and there are fewer examples of 'reticular synthesis', in which multiple building blocks can be assembled according to a common assembly motif. Here we apply a chiral recognition strategy to a new family of tubular covalent cages to create both 1D porous nanotubes and 3D diamondoid pillared porous networks. The diamondoid networks are analogous to MOFs prepared from tetrahedral metal nodes and linear ditopic organic linkers. The crystal structures can be rationalized by computational lattice-energy searches, which provide an in silico screening method to evaluate candidate molecular building blocks. These results are a blueprint for applying the 'node and strut' principles of reticular synthesis to molecular crystals.

  16. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse services on the web, and the ontology reasoning refines the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  17. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    PubMed

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
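
    The shared paradigm described here can be reduced to a few lines: digest each database protein in silico, keep the peptides whose theoretical mass falls within a precursor tolerance, and score the survivors against the spectrum. The sketch below, with a truncated mass table, toy sequences and no fragment-ion scoring, is only meant to illustrate that candidate-selection step.

        import re

        WATER = 18.01056   # mass of H2O added to the residue sum (Da)
        MASS = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'V': 99.06841,
                'L': 113.08406, 'F': 147.06841, 'K': 128.09496, 'R': 156.10111}

        def tryptic_peptides(protein):
            # Trypsin cleaves C-terminal to K or R, except when followed by P.
            return [p for p in re.split(r'(?<=[KR])(?!P)', protein) if p]

        def peptide_mass(pep):
            return sum(MASS[aa] for aa in pep) + WATER

        def candidates(precursor_mass, proteins, tol=0.5):
            # Keep every tryptic peptide within the precursor mass tolerance;
            # a real engine would then score fragment ions against the spectrum.
            return [(pep, peptide_mass(pep))
                    for prot in proteins
                    for pep in tryptic_peptides(prot)
                    if abs(peptide_mass(pep) - precursor_mass) <= tol]

        proteins = ["GASVKLLFRGAK", "LLKSVAFRGG"]
        print(candidates(peptide_mass("LLFR"), proteins))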

  18. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    PubMed

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with these models relate to model structure, the limited characterization of disease progression, and the use of a small number of health states to capture events related to disease progression over time. None of the available models has been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model the progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
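
    To make the "limited number of health states" criticism concrete, the following is a minimal sketch of the kind of Markov cohort model often used in model-based evaluations, with four states and purely hypothetical annual transition probabilities.

        import numpy as np

        # Hypothetical annual transition probabilities between the states
        # (mild, moderate, severe, dead); each row sums to 1. Illustrative only.
        P = np.array([
            [0.70, 0.20, 0.05, 0.05],
            [0.00, 0.65, 0.25, 0.10],
            [0.00, 0.00, 0.80, 0.20],
            [0.00, 0.00, 0.00, 1.00],
        ])

        cohort = np.array([1.0, 0.0, 0.0, 0.0])   # whole cohort starts 'mild'
        for year in range(1, 11):
            cohort = cohort @ P                    # one annual Markov cycle
            print(year, np.round(cohort, 3))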

  19. Understanding Unintended Consequences and Health Information Technology: Contribution from the IMIA Organizational and Social Issues Working Group.

    PubMed

    Kuziemsky, C E; Randell, R; Borycki, E M

    2016-11-10

    No framework exists to identify and study unintended consequences (UICs) with a focus on organizational and social issues (OSIs). To address this shortcoming, we conducted a literature review to develop a framework for considering UICs and health information technology (HIT) from the perspective of OSIs. A literature review was conducted for the period 2000-2015 using the search terms "unintended consequences" and "health information technology". 67 papers were screened, of which 18 met inclusion criteria. Data extraction focused on the types of technologies studied, the types of UICs identified, and the methods of data collection and analysis used. A thematic analysis was used to identify themes related to UICs. We identified two overarching themes. The first was the definition and terminology used to classify and discuss UICs. The second was OSIs and UICs. For the OSI theme, we also identified four sub-themes: process change and evolution, individual-collaborative interchange, context of use, and approaches to model, study, and understand UICs. While there is a wide body of research on UICs, there is a lack of overall consensus on how they should be classified and reported, limiting our ability to understand the implications of UICs and how to manage them. More mixed-methods research and better proactive identification of UICs remain priorities. Our findings and framework of OSI considerations for studying UICs and HIT extend existing work on HIT and UICs by focusing on organizational and social issues.

  20. A mixed-method systematic review of the effectiveness and acceptability of preoperative psychological preparation programmes to reduce paediatric preoperative anxiety in elective surgery.

    PubMed

    Dai, Ying; Livesley, Joan

    2018-05-13

    To explore the effectiveness of preoperative psychological preparation programmes aimed at reducing paediatric preoperative anxiety, and the potential factors that could have an impact on parents' and children's acceptance of such interventions. Various preoperative psychological preparation programmes are available to address paediatric preoperative anxiety. No mixed-method review has been conducted to explore the effectiveness and acceptability of these programmes. A mixed-method systematic review. Seven bibliographic databases were searched from inception to September 2016, complemented by hand searching of key journals, scanning the reference lists of relevant reviews, a search for grey literature and the contacting of associated experts. The review process was conducted based on the framework developed by the Evidence for Policy and Practice Information and Co-ordinating Centre. A narrative summary and a thematic synthesis were developed to synthesise the quantitative and qualitative data respectively, followed by a third synthesis to combine the previous syntheses. Nineteen controlled trials and eleven qualitative studies were included for data synthesis. The controlled trials reveal that educational multimedia applications and web-based programmes may reduce paediatric preoperative anxiety, while the effectiveness of therapeutic play and books remains uncertain. Qualitative studies showed that parent-child dyads seek different levels of information. Providing information matched to each parent and child, actively involving children and their parents, and teaching them coping skills may be the essential hallmarks of a successful preoperative psychological preparation. Further research is necessary to confirm the effectiveness of therapeutic play and books. This article is protected by copyright. All rights reserved.

  1. The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation.

    PubMed

    Lewis, Cara C; Stanick, Cameo F; Martinez, Ruben G; Weiner, Bryan J; Kim, Mimi; Barwick, Melanie; Comtois, Katherine A

    2015-01-08

    Identification of psychometrically strong instruments for the field of implementation science is a high priority underscored in a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project's objectives address these limitations by identifying and applying a unique methodology to conduct a systematic and comprehensive review of quantitative instruments assessing constructs delineated in two of the field's most widely used frameworks, adopt a systematic search process (using standard search strings), and engage an international team of experts to assess the full range of psychometric criteria (reliability, construct and criterion validity). Although this work focuses on implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. This effort has culminated in a centralized online open-access repository of instruments depicting graphical head-to-head comparisons of their psychometric properties. This article describes the methodology and preliminary outcomes. The seven stages of the review, synthesis, and evaluation methodology include (1) setting the scope for the review, (2) identifying frameworks to organize and complete the review, (3) generating a search protocol for the literature review of constructs, (4) literature review of specific instruments, (5) development of an evidence-based assessment rating criteria, (6) data extraction and rating instrument quality by a task force of implementation experts to inform knowledge synthesis, and (7) the creation of a website repository. To date, this multi-faceted and collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (total 48 including subconstructs) that are relevant to implementation science. Despite numerous constructs having greater than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied. The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and identifying where further instrument development is needed.

  2. Evolving discriminators for querying video sequences

    NASA Astrophysics Data System (ADS)

    Iyengar, Giridharan; Lippman, Andrew B.

    1997-01-01

    In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing the sequences using motion, texture and colorimetry cues. This characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval are done accurately and in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using Genetic Programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video. The VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. Experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.
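
    One consequence of such a compact representation is that retrieval can be brute-force. The sketch below indexes stand-in 8-dimensional vectors (random numbers here; the original cues were motion, texture and colorimetry) and retrieves nearest neighbours by Euclidean distance; it is an assumption-laden stand-in, not the VideoBook implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        # Stand-in database: one 8-dimensional feature vector per video segment.
        database = rng.random((10_000, 8))

        def retrieve(db, q, k=5):
            # Brute-force nearest neighbours; with only 8 dimensions this stays
            # cheap enough for interactive, real-time style querying.
            d = np.linalg.norm(db - q, axis=1)
            idx = np.argsort(d)[:k]
            return idx, d[idx]

        # Query with a slightly perturbed copy of segment 42's vector.
        idx, dist = retrieve(database, database[42] + 0.01 * rng.standard_normal(8))
        print(idx, dist)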

  3. Understanding unemployed people's job search behaviour, unemployment experience and well-being: a comparison of expectancy-value theory and self-determination theory.

    PubMed

    Vansteenkiste, Maarten; Lens, Willy; De Witte, Hans; Feather, N T

    2005-06-01

    Previous unemployment research has directly tested hypotheses derived from expectancy-value theory (EVT; Feather, 1982, 1990), but no comparative analysis has been executed with another motivational framework. In one large study with 446 unemployed people, separate analyses provided good evidence for predictions derived from both EVT and self-determination theory (SDT; Deci & Ryan, 1985, 2000). Comparative analyses indicated that the type of people's job search motivation, as conceptualized through the notions of autonomous versus controlled motivation within SDT, is an important predictor of people's unemployment experience and well-being, beyond people's strength of motivation assessed within EVT through expectancies of finding a job and employment value. The importance of simultaneously testing two theoretical frameworks is discussed.

  4. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review.

    PubMed

    Cooper, Chris; Booth, Andrew; Britten, Nicky; Garside, Ruth

    2017-11-28

    The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged. Numerous studies have demonstrated their potential in identifying studies or study data that would have been missed by bibliographic database searching alone. What is less certain is how supplementary search methods actually work, how they are applied, and the consequent advantages, disadvantages and resource implications of each search method. The aim of this study is to compare current practice in using supplementary search methods with methodological guidance. Four methodological handbooks informing systematic review practice in the UK were read and audited to establish current methodological guidance. Studies evaluating the use of supplementary search methods were identified by searching five bibliographic databases. Studies were included if they (1) reported practical application of a supplementary search method (descriptive) or (2) examined the utility of a supplementary search method (analytical) or (3) identified/explored factors that impact on the utility of a supplementary method, when applied in practice. Thirty-five studies were included in this review in addition to the four methodological handbooks. Studies were published between 1989 and 2016, and dates of publication of the handbooks ranged from 1994 to 2014. Five supplementary search methods were reviewed: contacting study authors, citation chasing, handsearching, searching trial registers and web searching. There is reasonable consistency between recommended best practice (handbooks) and current practice (methodological studies) as it relates to the application of supplementary search methods. The methodological studies provide useful information on the effectiveness of the supplementary search methods, often seeking to evaluate aspects of the method to improve effectiveness or efficiency. In this way, the studies advance the understanding of the supplementary search methods. Further research is required, however, so that a rational choice can be made about which supplementary search strategies should be used, and when.

  5. A Simulation Study of Acoustic-Assisted Tracking of Whales for Mark-Recapture Surveys

    PubMed Central

    Peel, David; Miller, Brian S.; Kelly, Natalie; Dawson, Steve; Slooten, Elisabeth; Double, Michael C.

    2014-01-01

    Collecting enough data to obtain reasonable abundance estimates of whales is often difficult, particularly when studying rare species. Passive acoustics can be used to detect whale sounds and are increasingly used to estimate whale abundance. Much of the existing effort centres on the use of acoustics to estimate abundance directly, e.g. analysing detections in a distance sampling framework. Here, we focus on acoustics as a tool incorporated within mark-recapture surveys. In this context, acoustic tools are used to detect and track whales, which are then photographed or biopsied to provide data for mark-recapture analyses. The purpose of incorporating acoustics is to increase the encounter rate beyond using visual searching only. While this general approach is not new, its utility is rarely quantified. This paper predicts the “acoustically-assisted” encounter rate using a discrete-time individual-based simulation of whales and survey vessel. We validate the simulation framework using existing data from studies of sperm whales. We then use the framework to predict potential encounter rates in a study of Antarctic blue whales. We also investigate the effects of a number of the key parameters on encounter rate. Mean encounter rates from the simulation of sperm whales matched well with empirical data. Variance of encounter rate, however, was underestimated. The simulation of Antarctic blue whales found that passive acoustics should provide a 1.7–3.0 fold increase in encounter rate over visual-only methods. Encounter rate was most sensitive to acoustic detection range, followed by vocalisation rate. During survey planning and design, some indication of the relationship between expected sample size and effort is paramount; this simulation framework can be used to predict encounter rates and establish this relationship. For a case in point, the simulation framework indicates unequivocally that real-time acoustic tracking should be considered for quantifying the abundance of Antarctic blue whales via mark-recapture methods. PMID:24827919
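
    A stripped-down version of such a discrete-time individual-based simulation is sketched below: whales are scattered over a square study area, a vessel runs a straight transect, and an encounter is scored when a whale first comes within detection range. All parameter values are invented; the point is only that a larger (acoustic) detection radius raises the encounter rate relative to a smaller (visual) one.

        import numpy as np

        def encounter_rate(detection_km, n_whales=50, area_km=200.0,
                           speed_kmh=10.0, trials=500, seed=1):
            # Whales are static in this sketch; the vessel crosses the area once.
            rng = np.random.default_rng(seed)
            survey_h = area_km / speed_kmh
            rates = []
            for _ in range(trials):
                whales = rng.uniform(0, area_km, size=(n_whales, 2))
                found = np.zeros(n_whales, dtype=bool)
                for t in np.arange(0.0, survey_h, 0.1):   # 6-minute time steps
                    pos = np.array([speed_kmh * t, area_km / 2])
                    found |= np.linalg.norm(whales - pos, axis=1) <= detection_km
                rates.append(found.sum() / survey_h)      # encounters per hour
            return float(np.mean(rates))

        visual, acoustic = encounter_rate(2.0), encounter_rate(15.0)
        print(visual, acoustic, acoustic / visual)   # fold increase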

  6. A simulation study of acoustic-assisted tracking of whales for mark-recapture surveys.

    PubMed

    Peel, David; Miller, Brian S; Kelly, Natalie; Dawson, Steve; Slooten, Elisabeth; Double, Michael C

    2014-01-01

    Collecting enough data to obtain reasonable abundance estimates of whales is often difficult, particularly when studying rare species. Passive acoustics can be used to detect whale sounds and are increasingly used to estimate whale abundance. Much of the existing effort centres on the use of acoustics to estimate abundance directly, e.g. analysing detections in a distance sampling framework. Here, we focus on acoustics as a tool incorporated within mark-recapture surveys. In this context, acoustic tools are used to detect and track whales, which are then photographed or biopsied to provide data for mark-recapture analyses. The purpose of incorporating acoustics is to increase the encounter rate beyond using visual searching only. While this general approach is not new, its utility is rarely quantified. This paper predicts the "acoustically-assisted" encounter rate using a discrete-time individual-based simulation of whales and survey vessel. We validate the simulation framework using existing data from studies of sperm whales. We then use the framework to predict potential encounter rates in a study of Antarctic blue whales. We also investigate the effects of a number of the key parameters on encounter rate. Mean encounter rates from the simulation of sperm whales matched well with empirical data. Variance of encounter rate, however, was underestimated. The simulation of Antarctic blue whales found that passive acoustics should provide a 1.7-3.0 fold increase in encounter rate over visual-only methods. Encounter rate was most sensitive to acoustic detection range, followed by vocalisation rate. During survey planning and design, some indication of the relationship between expected sample size and effort is paramount; this simulation framework can be used to predict encounter rates and establish this relationship. For a case in point, the simulation framework indicates unequivocally that real-time acoustic tracking should be considered for quantifying the abundance of Antarctic blue whales via mark-recapture methods.

  7. What drives political commitment for nutrition? A review and framework synthesis to inform the United Nations Decade of Action on Nutrition

    PubMed Central

    Baker, Phillip; Hawkes, Corinna; Wingrove, Kate; Parkhurst, Justin; Thow, Anne Marie; Walls, Helen

    2018-01-01

    Introduction Generating country-level political commitment will be critical to driving forward action throughout the United Nations Decade of Action on Nutrition (2016–2025). In this review of the empirical nutrition policy literature, we ask: what factors generate, sustain and constrain political commitment for nutrition, how and under what circumstances? Our aim is to inform strategic ‘commitment-building’ actions. Method We adopted a framework synthesis method and realist review protocol. An initial framework was derived from relevant theory and then populated with empirical evidence to test and modify it. Five steps were undertaken: initial theoretical framework development; search for relevant empirical literature; study selection and quality appraisal; data extraction, analysis and synthesis; and framework modification. Results 75 studies were included. We identified 18 factors that drive commitment, organised into five categories: actors; institutions; political and societal contexts; knowledge, evidence and framing; and, capacities and resources. Irrespective of country-context, effective nutrition actor networks, strong leadership, civil society mobilisation, supportive political administrations, societal change and focusing events, cohesive and resonant framing, and robust data systems and available evidence were commitment drivers. Low-income and middle-income country studies also frequently reported international actors, empowered institutions, vertical coordination and capacities and resources. In upper-middle-income and high-income country studies, private sector interference frequently undermined commitment. Conclusion Political commitment is not something that simply exists or emerges accidentally; it can be created and strengthened over time through strategic action. Successfully generating commitment will likely require a core set of actions with some context-dependent adaptations. Ultimately, it will necessitate strategic actions by cohesive, resourced and strongly led nutrition actor networks that are responsive to the multifactorial, multilevel and dynamic political systems in which they operate and attempt to influence. Accelerating the formation and effectiveness of such networks over the Nutrition Decade should be a core task for all actors involved. PMID:29527338

  8. Automated detection of hospital outbreaks: A systematic review of methods

    PubMed Central

    Buckeridge, David L.; Lepelletier, Didier

    2017-01-01

    Objectives Several automated algorithms for epidemiological surveillance in hospitals have been proposed. However, the usefulness of these methods to detect nosocomial outbreaks remains unclear. The goal of this review was to describe outbreak detection algorithms that have been tested within hospitals, consider how they were evaluated, and synthesize their results. Methods We developed a search query using keywords associated with hospital outbreak detection and searched the MEDLINE database. To ensure the highest sensitivity, no limitations were initially imposed on publication languages and dates, although we subsequently excluded studies published before 2000. Every study that described a method to detect outbreaks within hospitals was included, without any exclusion based on study design. Additional studies were identified through citations in retrieved studies. Results Twenty-nine studies were included. The detection algorithms were grouped into 5 categories: simple thresholds (n = 6), statistical process control (n = 12), scan statistics (n = 6), traditional statistical models (n = 6), and data mining methods (n = 4). The evaluation of the algorithms was often solely descriptive (n = 15), but more complex epidemiological criteria were also investigated (n = 10). The performance measures varied widely between studies: e.g., the sensitivity of an algorithm in a real-world setting could vary between 17% and 100%. Conclusion Even if outbreak detection algorithms are useful complementary tools for traditional surveillance, the heterogeneity in results among published studies does not support quantitative synthesis of their performance. A standardized framework should be followed when evaluating outbreak detection methods to allow comparison of algorithms across studies and synthesis of results. PMID:28441422
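
    For illustration, here is a minimal sketch of one representative from each of the two simplest categories above, a fixed threshold rule and a one-sided CUSUM chart (statistical process control), run on hypothetical weekly case counts with an injected outbreak.

        import numpy as np

        def simple_threshold(counts, mean, sd, z=2.0):
            # Flag every week whose count exceeds mean + z standard deviations.
            return np.where(counts > mean + z * sd)[0]

        def cusum(counts, target_mean, k=0.5, h=4.0):
            # One-sided CUSUM: accumulate excesses over target_mean + k and
            # raise an alarm whenever the cumulative sum crosses h.
            s, alarms = 0.0, []
            for t, x in enumerate(counts):
                s = max(0.0, s + (x - target_mean - k))
                if s > h:
                    alarms.append(t)
                    s = 0.0                  # restart after signalling
            return alarms

        # Hypothetical weekly counts with an outbreak injected in weeks 20-24.
        rng = np.random.default_rng(3)
        counts = rng.poisson(2.0, 30).astype(float)
        counts[20:25] += 4
        print(simple_threshold(counts, mean=2.0, sd=np.sqrt(2.0)))
        print(cusum(counts, target_mean=2.0))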

  9. Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance

    DTIC Science & Technology

    2003-07-21

    Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance. Vincent A. Cicirello, CMU-RI-TR-03-27 (report submitted in partial fulfillment). The surviving abstract fragment describes work that led to the development of a search control framework, called QD-BEACON, that uses online-generated statistical models of search performance.

  10. Issues and solutions for storage, retrieval, and searching of MPEG-7 documents

    NASA Astrophysics Data System (ADS)

    Chang, Yuan-Chi; Lo, Ming-Ling; Smith, John R.

    2000-10-01

    The ongoing MPEG-7 standardization activity aims at creating a standard for describing multimedia content in order to facilitate the interpretation of the associated information content. Attempting to address a broad range of applications, MPEG-7 has defined a flexible framework consisting of Descriptors, Description Schemes, and the Description Definition Language. Descriptors and Description Schemes describe the features, structure and semantics of multimedia objects. They are written in the Description Definition Language (DDL). In the most recent revision, DDL applies XML (Extensible Markup Language) Schema with MPEG-7 extensions. DDL has constructs that support inclusion, inheritance, reference, enumeration, choice, sequence, and abstract types of Description Schemes and Descriptors. In order to enable multimedia systems to use MPEG-7, a number of important problems in storing, retrieving and searching MPEG-7 documents need to be solved. This paper reports initial findings on issues and solutions in storing and accessing MPEG-7 documents. In particular, we discuss the benefits of using a virtual document management framework based on an XML Access Server (XAS) in order to bridge MPEG-7 multimedia applications and database systems. The need arises partly because MPEG-7 descriptions need customized storage schemas, indexing and search engines. We also discuss issues arising in managing dependence and cross-description-scheme search.
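
    As a toy illustration of why XML-aware storage and querying matter here, the sketch below parses an MPEG-7-flavoured description and runs a simple path query with Python's standard library. The element names are simplified inventions, not the actual MPEG-7 schema.

        import xml.etree.ElementTree as ET

        # Drastically simplified, MPEG-7-flavoured description; the real
        # standard defines Descriptors and Description Schemes in its DDL.
        doc = """
        <Mpeg7>
          <Video id="clip-042">
            <CreationTitle>Harbour at dawn</CreationTitle>
            <DominantColor>blue</DominantColor>
            <MotionActivity>low</MotionActivity>
          </Video>
          <Video id="clip-043">
            <CreationTitle>Street market</CreationTitle>
            <DominantColor>red</DominantColor>
            <MotionActivity>high</MotionActivity>
          </Video>
        </Mpeg7>
        """

        root = ET.fromstring(doc)
        # Path query: titles of all low-motion clips.
        for video in root.findall(".//Video"):
            if video.findtext("MotionActivity") == "low":
                print(video.get("id"), video.findtext("CreationTitle"))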

  11. The dimensions of nursing surveillance: a concept analysis.

    PubMed

    Kelly, Lesly; Vincent, Deborah

    2011-03-01

    This paper is a report of an analysis of the concept of nursing surveillance. Nursing surveillance, a primary function of acute care nurses, is critical to patient safety and outcomes. Although it has been associated with patient outcomes and the organizational context of care, little knowledge has been generated about the conceptual and operational process of surveillance. A search of the CINAHL, Medline and PubMed databases was used to compile an international data set of 18 papers and 4 book chapters published from 1985 to 2009. Rodgers' evolutionary concept analysis techniques were used to analyse surveillance in a systems framework. This method focused the search on nursing surveillance (as opposed to other medical uses of the term) and used a theoretical framework to guide the analysis. The examination of the literature clarifies the multifaceted nature of nursing surveillance in the acute care setting. Surveillance involves the purposeful and ongoing acquisition, interpretation and synthesis of patient data for clinical decision-making. Behavioural activities and multiple cognitive processes are used in surveillance in order for the nurse to make decisions for patient safety and health maintenance. A systems approach to the analysis also demonstrates how organizational characteristics and contextual factors influence the process in the acute care environment. This conceptual analysis describes the nature of the surveillance process and clarifies the concept for effective communication and future use in health services research. © 2010 The Authors. Journal of Advanced Nursing © 2010 Blackwell Publishing Ltd.

  12. Community accountability at peripheral health facilities: a review of the empirical literature and development of a conceptual framework

    PubMed Central

    Molyneux, Sassy; Atela, Martin; Angwenyi, Vibian; Goodman, Catherine

    2012-01-01

    Public accountability has re-emerged as a top priority for health systems all over the world, and particularly in developing countries where governments have often failed to provide adequate public sector services for their citizens. One approach to strengthening public accountability is through direct involvement of clients, users or the general public in health delivery, here termed ‘community accountability’. The potential benefits of community accountability, both as an end in itself and as a means of improving health services, have led to significant resources being invested by governments and non-governmental organizations. Data are now needed on the implementation and impact of these initiatives on the ground. A search of PubMed using a systematic approach, supplemented by a hand search of key websites, identified 21 papers from low- or middle-income countries describing at least one measure to enhance community accountability that was linked with peripheral facilities. Mechanisms covered included committees and groups (n = 19), public report cards (n = 1) and patients’ rights charters (n = 1). In this paper we summarize the data presented in these papers, including impact, and factors influencing impact, and conclude by commenting on the methods used, and the issues they raise. We highlight that the international interest in community accountability mechanisms linked to peripheral facilities has not been matched by empirical data, and present a conceptual framework and a set of ideas that might contribute to future studies. PMID:22279082

  13. Online chemical modeling environment (OCHEM): web platform for data storage, model development and publishing of chemical information

    NASA Astrophysics Data System (ADS)

    Sushko, Iurii; Novotarskyi, Sergii; Körner, Robert; Pandey, Anil Kumar; Rupp, Matthias; Teetz, Wolfram; Brandmaier, Stefan; Abdelaziz, Ahmed; Prokopenko, Volodymyr V.; Tanchuk, Vsevolod Y.; Todeschini, Roberto; Varnek, Alexandre; Marcou, Gilles; Ertl, Peter; Potemkin, Vladimir; Grishina, Maria; Gasteiger, Johann; Schwab, Christof; Baskin, Igor I.; Palyulin, Vladimir A.; Radchenko, Eugene V.; Welsh, William J.; Kholodovych, Vladyslav; Chekmarev, Dmitriy; Cherkasov, Artem; Aires-de-Sousa, Joao; Zhang, Qing-You; Bender, Andreas; Nigsch, Florian; Patiny, Luc; Williams, Antony; Tkachenko, Valery; Tetko, Igor V.

    2011-06-01

    The Online Chemical Modeling Environment is a web-based platform that aims to automate and simplify the typical steps required for QSAR modeling. The platform consists of two major subsystems: the database of experimental measurements and the modeling framework. A user-contributed database contains a set of tools for easy input, search and modification of thousands of records. The OCHEM database is based on the wiki principle and focuses primarily on the quality and verifiability of the data. The database is tightly integrated with the modeling framework, which supports all the steps required to create a predictive model: data search, calculation and selection of a vast variety of molecular descriptors, application of machine learning methods, validation, analysis of the model and assessment of the applicability domain. Compared to other similar systems, OCHEM is not intended to re-implement existing tools or models but rather to invite the original authors to contribute their results, make them publicly available, share them with other users and become members of the growing research community. Our intention is to make OCHEM a widely used platform for performing QSPR/QSAR studies online and sharing them with other users on the Web. The ultimate goal of OCHEM is to collect all possible chemoinformatics tools within one simple, reliable and user-friendly resource. OCHEM is free for web users and is available online at http://www.ochem.eu.
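
    The modeling side of such a platform reduces, at its core, to descriptor calculation plus supervised learning. Below is a minimal sketch of that step, assuming RDKit and scikit-learn are installed; the five-molecule data set, the descriptor choice and the random forest are illustrative, not OCHEM's actual pipeline.

        from rdkit import Chem
        from rdkit.Chem import Descriptors
        from sklearn.ensemble import RandomForestRegressor

        # Tiny hypothetical training set: SMILES strings with a measured
        # property value; real QSAR work needs far more data and validation.
        data = [("CCO", -0.77), ("CCCCCC", -3.84), ("c1ccccc1", -1.64),
                ("CC(=O)O", -0.17), ("CCN(CC)CC", -1.45)]

        def featurize(smiles):
            mol = Chem.MolFromSmiles(smiles)
            # A handful of classic whole-molecule descriptors.
            return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                    Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol)]

        X = [featurize(s) for s, _ in data]
        y = [v for _, v in data]
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
        print(model.predict([featurize("CCCO")]))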

  14. Technical skills assessment toolbox: a review using the unitary framework of validity.

    PubMed

    Ghaderi, Iman; Manji, Farouq; Park, Yoon Soo; Juul, Dorthea; Ott, Michael; Harris, Ilene; Farrell, Timothy M

    2015-02-01

    The purpose of this study was to create a technical skills assessment toolbox for the 35 basic and advanced skills/procedures that comprise the American College of Surgeons (ACS)/Association of Program Directors in Surgery (APDS) surgical skills curriculum, and to provide a critical appraisal of the included tools using the contemporary framework of validity. Competency-based training has become the predominant model in surgical education, and assessment of performance is an essential component. Assessment methods must produce valid results to accurately determine the level of competency. A search was performed, using PubMed and Google Scholar, to identify tools that have been developed for assessment of the targeted technical skills. A total of 23 assessment tools for the 35 ACS/APDS skills modules were identified. Some tools, such as the Objective Structured Assessment of Technical Skill (OSATS) and the Operative Performance Rating System (OPRS), have been tested for more than 1 procedure. Therefore, 30 modules had at least 1 assessment tool, with some common surgical procedures being addressed by several tools. Five modules had none. Only 3 studies used Messick's framework to design their validity studies. The remaining studies used an outdated framework based on "types of validity." When analyzed using the contemporary framework, few of these studies demonstrated validity for content, internal structure, and relationship to other variables. This study provides an assessment toolbox for common surgical skills/procedures. Our review shows that few authors have used the contemporary unitary concept of validity for the development of their assessment tools. As we progress toward competency-based training, future studies should provide evidence for various sources of validity using the contemporary framework.

  15. [Current considerations around the search for extraterrestrial life].

    PubMed

    González de Posada, F

    2000-01-01

    In this paper, the following current cosmological topics are considered: a) The fourth centenary of Giordano Bruno's death at the stake of the Roman Inquisition. This eminent philosopher, building on the Copernican Revolution, conceived of the Cosmos as an infinite universe with innumerable inhabited worlds. He argued on rational grounds for belief not only in extraterrestrial life but in extraterrestrial intelligent life. Here we write a few words in his memory and honour. b) The active project SETI@home within the framework of today's classic program "Search for Extra-Terrestrial Intelligence", by means of the reception of radioelectrical signals. c) The search for extrasolar planets.

  16. Disseminating research findings: what should researchers do? A systematic scoping review of conceptual frameworks

    PubMed Central

    2010-01-01

    Background Addressing deficiencies in the dissemination and transfer of research-based knowledge into routine clinical practice is high on the policy agenda both in the UK and internationally. However, there is lack of clarity between funding agencies as to what represents dissemination. Moreover, the expectations and guidance provided to researchers vary from one agency to another. Against this background, we performed a systematic scoping to identify and describe any conceptual/organising frameworks that could be used by researchers to guide their dissemination activity. Methods We searched twelve electronic databases (including MEDLINE, EMBASE, CINAHL, and PsycINFO), the reference lists of included studies and of individual funding agency websites to identify potential studies for inclusion. To be included, papers had to present an explicit framework or plan either designed for use by researchers or that could be used to guide dissemination activity. Papers which mentioned dissemination (but did not provide any detail) in the context of a wider knowledge translation framework, were excluded. References were screened independently by at least two reviewers; disagreements were resolved by discussion. For each included paper, the source, the date of publication, a description of the main elements of the framework, and whether there was any implicit/explicit reference to theory were extracted. A narrative synthesis was undertaken. Results Thirty-three frameworks met our inclusion criteria, 20 of which were designed to be used by researchers to guide their dissemination activities. Twenty-eight included frameworks were underpinned at least in part by one or more of three different theoretical approaches, namely persuasive communication, diffusion of innovations theory, and social marketing. Conclusions There are currently a number of theoretically-informed frameworks available to researchers that can be used to help guide their dissemination planning and activity. Given the current emphasis on enhancing the uptake of knowledge about the effects of interventions into routine practice, funders could consider encouraging researchers to adopt a theoretically-informed approach to their research dissemination. PMID:21092164

  17. Computationally mapping sequence space to understand evolutionary protein engineering.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce

    2008-01-01

    Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.
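
    The mutation-rate/rounds trade-off the authors describe can be reproduced qualitatively on a toy landscape. The sketch below scores sequences by similarity to an arbitrary target (a crude stand-in for the paper's structure-based feasibility criterion) and runs an error-prone-PCR-like mutate-and-select loop; all parameters are invented.

        import random

        AA = "ACDEFGHIKLMNPQRSTVWY"
        TARGET = "MKVLAATGLF"   # hypothetical feasible sequence used as the peak

        def fitness(seq):
            # Toy landscape: fraction of positions matching the target.
            return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

        def mutate(seq, rate, rng):
            # Error-prone-PCR-like point mutations at a per-residue rate.
            return "".join(rng.choice(AA) if rng.random() < rate else a
                           for a in seq)

        def evolve(rate, rounds, pop_size=50, keep=10, seed=7):
            rng = random.Random(seed)
            pop = ["".join(rng.choice(AA) for _ in TARGET)
                   for _ in range(pop_size)]
            for _ in range(rounds):
                parents = sorted(pop, key=fitness, reverse=True)[:keep]
                pop = [mutate(rng.choice(parents), rate, rng)
                       for _ in range(pop_size)]
            return max(fitness(s) for s in pop)

        # Few rounds favour aggressive mutation; many rounds favour caution.
        for rate in (0.01, 0.05, 0.2):
            print(rate, evolve(rate, rounds=5), evolve(rate, rounds=50))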

  18. Evaluating ecommerce websites cognitive efficiency: an integrative framework based on data envelopment analysis.

    PubMed

    Lo Storto, Corrado

    2013-11-01

    This paper presents an integrative framework to evaluate ecommerce website efficiency from the user viewpoint using Data Envelopment Analysis (DEA). This framework is inspired by concepts drawn from theories of information processing and cognition, and considers website efficiency as a measure of its quality and performance. When users interact with the website interface to perform a task, they are involved in a cognitive effort, sustaining a cognitive cost to search, interpret and process information, and experiencing either a sense of satisfaction or dissatisfaction as a result. The amount of ambiguity and uncertainty they perceive, and the search (over-)time during navigation, determine the size of the effort, and as a consequence the cognitive cost, they have to bear to perform their task. On the contrary, task performance and result achievement provide the users with cognitive benefits, making interaction with the website potentially attractive, satisfying, and useful. In total, 9 variables are measured, classified into a set of 3 website macro-dimensions (user experience, site navigability and structure). The framework is implemented to compare 52 ecommerce websites that sell products in the information technology and media market. A stepwise regression is performed to assess the influence of the cognitive costs and benefits that most affect website efficiency. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
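
    For readers unfamiliar with DEA, the sketch below solves the multiplier form of the input-oriented CCR model with scipy. In the spirit of the paper, inputs are cognitive cost measures and outputs cognitive benefit measures, but the four-website data set and variable choice are invented.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, o):
            # Input-oriented CCR model, multiplier form, for unit o:
            #   max u.Y[o]  s.t.  v.X[o] = 1,
            #                     u.Y[j] - v.X[j] <= 0 for every unit j,
            #                     u >= 0, v >= 0.
            n, m = X.shape          # units x inputs
            s = Y.shape[1]          # number of outputs
            c = np.concatenate([-Y[o], np.zeros(m)])             # minimise -u.Y[o]
            A_ub = np.hstack([Y, -X])                            # u.Y[j] - v.X[j] <= 0
            A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.X[o] = 1
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * (s + m))
            return -res.fun          # efficiency score in (0, 1]

        # Hypothetical data: 4 websites; inputs = cognitive costs (search time,
        # perceived ambiguity), outputs = benefits (task completion, satisfaction).
        X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 5.0]])
        Y = np.array([[8.0, 7.0], [6.0, 8.0], [5.0, 5.0], [9.0, 6.0]])
        for o in range(len(X)):
            print(o, round(dea_ccr_efficiency(X, Y, o), 3))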

  19. Effective Filtering of Query Results on Updated User Behavioral Profiles in Web Mining

    PubMed Central

    Sadesh, S.; Suganthe, R. C.

    2015-01-01

    The web retrieves results for user-related queries from a tremendous volume of information. Despite the rapid growth of web page recommendation, results retrieved using existing data mining techniques have not achieved high filtering rates because the relationships between user profiles and queries were not analysed extensively. At the same time, existing user-profile-based prediction in web data mining is not exhaustive in producing personalized results. To improve the query result rate under the dynamics of user behavior over time, the Hamilton Filtered Regime Switching User Query Probability (HFRS-UQP) framework is proposed. The HFRS-UQP framework is split into two processes, in which filtering and switching are carried out. The data-mining-based filtering in this work uses the Hamilton filtering framework to filter results for a user based on personalized information from profiles that are automatically updated through the search engine; a maximized result set is fetched, filtered with respect to user behavior profiles. The switching process performs accurate filtering on the updated profiles using regime switching: a change in profile switches the regime, which lets HFRS-UQP identify second- and higher-order associations between query results and the updated profiles. Experiments were conducted on factors such as personalized information search retrieval rate, filtering efficiency, and precision ratio. PMID:26221626

  20. How far are we from full implementation of health promoting workplace concepts? A review of implementation tools and frameworks in workplace interventions.

    PubMed

    Motalebi G, Masoud; Keshavarz Mohammadi, Nastaran; Kuhn, Karl; Ramezankhani, Ali; Azari, Mansour R

    2018-06-01

    Health promoting workplace frameworks provide a holistic view of the determinants of workplace health and the links between individuals, work and environment; however, the operationalization of these frameworks has not been very clear. This study provides a typology of the different understandings and frameworks/tools used in workplace health promotion practice or research worldwide. It discusses the degree of their conformity with the spirit of the Ottawa Charter and the key actions expected to be implemented in health promoting settings such as workplaces. A comprehensive online search was conducted utilizing relevant key words. The search also included the official websites of related international, regional, and national organizations. After exclusion, 27 texts were analysed utilizing conventional content analysis. The results of the analysis were categorized into dimensions (level or main structure) of a healthy or health promoting workplace and subcategorized into characteristics/criteria of a healthy/health promoting workplace. Our analysis shows diversity and ambiguity in the workplace health literature regarding the domains and characteristics of a healthy/health promoting workplace. This may be rooted in the lack of a common understanding of the concepts or in differing social and work environment contexts. The development of global or national health promoting workplace standards through a participatory process might be considered as a potential solution.
