DOE Office of Scientific and Technical Information (OSTI.GOV)
Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division
2007-01-01
The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.
Applying New Diabetes Teaching Tools in Health-Related Extension Programming
ERIC Educational Resources Information Center
Grenci, Alexandra
2010-01-01
In response to the emerging global diabetes epidemic, health educators are searching for new and better education tools to help people make positive behavior changes to successfully prevent or manage diabetes. Conversation Maps[R] are new learner-driven education tools that have been developed to empower individuals to improve their health…
Global Land Survey (GLS) Data Access - ESDI Download via Search and Preview Tool
The Global Land Survey (GLS) collection of Landsat imagery is designed to meet a need. Availability: GLS 1975, 1990, 2000, and 2005 are available to the public for free at the US Geological Survey.
Doherty, S; Oram, S; Siriwardhana, C; Abas, M
2016-05-01
Trafficking is a global human rights violation with multiple and complex mental health consequences. Valid and reliable mental health assessment tools are needed to inform health-care provision. We reviewed mental health assessment tools used in research with men and women trafficked for sexual and labour exploitation. We searched nine electronic databases (PsycINFO, Ovid Medline, PubMed, Embase, Assia, the Web of Science, Global Health, Google Scholar, and Open Grey) and hand-searched the reference lists of relevant identified studies. Seven studies were included in this Review. Six of the studies screened for post-traumatic stress disorder, depression, and anxiety; one study screened for harmful use or abuse of alcohol and used a diagnostic tool to assess post-traumatic stress disorder, depression, and anxiety. Two studies included men in their sample population. Although the reported prevalence of mental health problems was high, little information was provided about the validity, reliability, and cultural appropriateness of assessment tools. Further research is needed to determine which assessment tools are culturally appropriate, valid, and reliable for trafficked people. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assessing teamwork performance in obstetrics: A systematic search and review of validated tools.
Fransen, Annemarie F; de Boer, Liza; Kienhorst, Dieneke; Truijens, Sophie E; van Runnard Heimel, Pieter J; Oei, S Guid
2017-09-01
Teamwork performance is an essential component of the clinical efficiency of multi-professional teams in obstetric care. As patient safety is related to teamwork performance, it has become an important learning goal in simulation-based education. In order to improve teamwork performance, reliable assessment tools are required. These can be used to provide feedback during training courses, or to compare learning effects between different types of training courses. The aims of the current study are to (1) identify the available assessment tools for evaluating obstetric teamwork performance in a simulated environment, and (2) evaluate their psychometric properties in order to identify the most valuable tool(s) to use. We performed a systematic search in PubMed, MEDLINE, and EMBASE to identify articles describing assessment tools for the evaluation of obstetric teamwork performance in a simulated environment. To evaluate the quality of the identified assessment tools, the standards and grading rules recommended by the Accreditation Council for Graduate Medical Education (ACGME) Committee on Educational Outcomes were applied. The included studies were also assessed according to the Oxford Centre for Evidence Based Medicine (OCEBM) levels of evidence. This search resulted in the inclusion of five articles describing the following six tools: Clinical Teamwork Scale, Human Factors Rating Scale, Global Rating Scale, Assessment of Obstetric Team Performance, Global Assessment of Obstetric Team Performance, and the Teamwork Measurement Tool. Based on the ACGME guidelines we assigned a Class 3, level C of evidence, to all tools. Regarding the OCEBM levels of evidence, a level 3b was assigned to two studies and a level 4 to four studies. The Clinical Teamwork Scale demonstrated the most comprehensive validation, and the Teamwork Measurement Tool demonstrated promising results; however, further investigation of its reliability is recommended. Copyright © 2017. Published by Elsevier B.V.
Teamwork Assessment Tools in Obstetric Emergencies: A Systematic Review.
Onwochei, Desire N; Halpern, Stephen; Balki, Mrinalini
2017-06-01
Team-based training and simulation can improve patient safety, by improving communication, decision making, and performance of team members. Currently, there is no general consensus on whether or not a specific assessment tool is better adapted to evaluate teamwork in obstetric emergencies. The purpose of this qualitative systematic review was to find the tools available to assess team effectiveness in obstetric emergencies. We searched Embase, Medline, PubMed, Web of Science, PsycINFO, CINAHL, and Google Scholar for prospective studies that evaluated nontechnical skills in multidisciplinary teams involving obstetric emergencies. The search included studies from 1944 until January 11, 2016. Data on reliability and validity measures were collected and used for interpretation. A descriptive analysis was performed on the data. Thirteen studies were included in the final qualitative synthesis. All the studies assessed teams in the context of obstetric simulation scenarios, but only six included anesthetists in the simulations. One study evaluated their teamwork tool using just validity measures, five using just reliability measures, and one used both. The most reliable tools identified were the Clinical Teamwork Scale, the Global Assessment of Obstetric Team Performance, and the Global Rating Scale of performance. However, they were still lacking in terms of quality and validity. More work needs to be conducted to establish the validity of teamwork tools for nontechnical skills, and the development of an ideal tool is warranted. Further studies are required to assess how outcomes, such as performance and patient safety, are influenced when using these tools.
ERIC Educational Resources Information Center
Italia, Nadia; Rehfuess, Eva A.
2012-01-01
Exposure to ultraviolet radiation is an important risk factor for skin cancer. The Global Solar Ultraviolet Index (UVI) was developed as a tool to visualize the amount of harmful radiation and to encourage people to use sun protection. We conducted a systematic review of the effectiveness of the UVI. We employed a comprehensive search strategy to…
Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.
2007-01-01
The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
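The semi-global alignment the abstract above contrasts with local alignment can be sketched in a few lines of dynamic programming: the complete query (a domain) must align end-to-end, while gaps before and after the matched region of the target (a protein sequence) are free. This is an illustrative sketch of the technique, not the GLOBAL tool itself; the scoring parameters are arbitrary.

```python
# Minimal semi-global ("glocal") alignment score: free end gaps in the
# target, but the full query must be consumed.
def semi_global_score(query, target, match=1, mismatch=-1, gap=-2):
    n = len(target)
    # prev[j]: best score aligning a query prefix against target[:j].
    prev = [0] * (n + 1)          # leading gaps in the target are free
    for i in range(1, len(query) + 1):
        curr = [prev[0] + gap]    # skipping query residues is penalized
        for j in range(1, n + 1):
            diag = prev[j - 1] + (match if query[i - 1] == target[j - 1] else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return max(prev)              # trailing gaps in the target are free

# A complete domain embedded in a longer protein scores as a full match:
print(semi_global_score("DOMAIN", "XXXDOMAINXXX"))  # 6
```

Initializing the first row to zero and taking the maximum over the last row is exactly what distinguishes this variant from the standard Needleman-Wunsch global alignment.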
CLAST: CUDA implemented large-scale alignment search tool.
Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken
2014-12-11
Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. 
Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
Metabolomics for secondary metabolite research.
Breitling, Rainer; Ceniceros, Ana; Jankevics, Andris; Takano, Eriko
2013-11-11
Metabolomics, the global characterization of metabolite profiles, is becoming an increasingly powerful tool for research on secondary metabolite discovery and production. In this review we discuss examples of recent technological advances and biological applications of metabolomics in the search for chemical novelty and the engineered production of bioactive secondary metabolites.
Smithsonian Folkways: Resources for World and Folk Music Multimedia
ERIC Educational Resources Information Center
Beegle, Amy Christine
2012-01-01
This column describes multimedia resources available to teachers on the Smithsonian Folkways website. In addition to massive collections of audio and video recordings and advanced search tools already available through this website, the Smithsonian Global Sound educational initiative brought detailed lesson plans and interactive features to the…
Software project management tools in global software development: a systematic mapping study.
Chadli, Saad Yasser; Idri, Ali; Ros, Joaquín Nicolás; Fernández-Alemán, José Luis; de Gea, Juan M Carrillo; Toval, Ambrosio
2016-01-01
Global software development (GSD), a growing trend in the software industry, is characterized by a highly distributed environment. Performing software project management (SPM) in such conditions implies the need to overcome new limitations resulting from cultural, temporal and geographic separation. The aim of this research is to discover and classify the various tools mentioned in the literature that provide GSD project managers with support, and to identify in what way they support group interaction. A systematic mapping study was performed by means of automatic searches in five sources, after which the extracted data were synthesized and the results presented. A total of 102 tools were identified as being used in SPM activities in GSD. We classified these tools according to the software life cycle process on which they focus and how they support the 3C collaboration model (communication, coordination and cooperation). The majority of the tools found are standalone tools (77%). A small number of platforms (8%) also offer a set of interacting tools that cover the software development lifecycle. Results also indicate that SPM areas in GSD are not adequately supported by existing tools and deserve more attention from tool builders.
Googling "Deaf": Deafness in the World's English-Language Press
ERIC Educational Resources Information Center
Power, Des
2007-01-01
An Internet search tool, Google Alert, was used to survey the global English-language press July-December 2005 for references to deaf people. The survey found that such references focus on people who are deaf rather than the disability itself, thus demonstrating how well deaf people fit into the mainstream. Derogatory terminology such as "deaf and…
Global Search Trends of Oral Problems using Google Trends from 2004 to 2016: An Exploratory Analysis
Patthi, Basavaraj; Singla, Ashish; Gupta, Ritu; Prasad, Monika; Ali, Irfan; Dhama, Kuldeep; Niraj, Lav Kumar
2017-01-01
Introduction Oral diseases are a pandemic cause of morbidity with widespread geographic distribution. This technology-based era has enabled easier knowledge transfer than the traditional dependency on information obtained from family doctors. Hence, harvesting this system of trends can aid in oral disease quantification. Aim To conduct an exploratory analysis of the changes in internet search volumes of oral diseases by using Google Trends© (GT©). Materials and Methods GT© was utilized to provide real-world facts based on search terms related to categories, interest by region and interest over time. The time period chosen was from January 2004 to December 2016. Five different search terms were explored and compared based on the highest relative search volumes, along with comma-separated value files, to obtain an insight into the highest search traffic. Results Over the time span measured, the term “Dental caries” was most searched in Japan, “Gingivitis” in Jordan, “Oral Cancer” in Taiwan, “No Teeth” in Australia, “HIV symptoms” in Zimbabwe, “Broken Teeth” in the United Kingdom, “Cleft palate” in the Philippines, and “Toothache” in Indonesia; the comparison of the top five searched terms showed “Gingivitis” to have the highest search volume. Conclusion The results of the present study offer an insight into a competent tool that can analyse and compare oral diseases over time. The trend research platform can be used to follow emerging diseases and their drift across geographic populations with great acumen. This tool can be utilized in forecasting, modulating marketing strategies and planning disability limitation techniques. PMID:29207825
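The comparison step described above (ranking terms by relative search volume from a comma-separated value export) can be sketched as follows. The CSV layout and column names here are hypothetical, chosen only to resemble a Google Trends-style export of 0-100 relative volumes per term per week.

```python
# Toy sketch: find the term with the highest average relative search
# volume in a Google Trends-style CSV export (columns are illustrative).
import csv
import io

sample = """week,Dental caries,Gingivitis,Toothache
2016-01-03,40,72,55
2016-01-10,38,80,50
2016-01-17,42,76,58
"""

def top_term(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    terms = [t for t in rows[0] if t != "week"]
    # Mean relative volume per term across all weeks.
    means = {t: sum(int(r[t]) for r in rows) / len(rows) for t in terms}
    return max(means, key=means.get)

print(top_term(sample))  # Gingivitis
```

Real Google Trends exports only give volumes relative to the peak within the query, so comparisons are meaningful only among terms requested together, as the study's five-term comparison does.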
How natural hazards influence Internet searches
NASA Astrophysics Data System (ADS)
Geyer, Adelina; Martí, Joan; Villaseñor, Antonio
2017-04-01
Effective dissemination of correct and easy-to-understand scientific information is one of the most important tasks of natural hazard assessment and risk management, with the media and the population being the two fundamental groups of recipients. During hazardous natural phenomena, the media and the public urgently seek information through all possible channels. Traditionally these have been radio and television, but over the past decades the Internet has also become a significant information resource. Nevertheless, how Internet search behavior changes during natural phenomena of significant societal impact (i.e., involving substantial human and/or economic losses) has never been analyzed. Focusing mainly on volcanism, we use Internet search data provided by Google Trends for the first time to examine the search patterns of volcanology-related terms and how these may change during unrest periods or volcanic crises. The results allow us to evaluate, at global and local scales, society's interest in volcanological phenomena and its background knowledge of Earth Sciences. We show that Internet search data turn out to be a promising tool for global and local monitoring of society's awareness of and educational background on natural phenomena in general, and volcanic hazards in particular.
Moy, Kyle; Li, Weiyu; Tran, Huu Phuoc; Simonis, Valerie; Story, Evan; Brandon, Christopher; Furst, Jacob; Raicu, Daniela; Kim, Hongkyun
2015-01-01
The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. Particularly, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. 
In summary, our tracker and analysis software will help analyze the neural basis of the alteration and transition of C. elegans locomotory behavior in a food-deprived condition. PMID:26713869
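The Lévy-versus-Brownian classification mentioned above rests on the tail of the step-length distribution: a power-law exponent mu with 1 < mu <= 3 is conventionally read as Lévy-flight-like, while thin (e.g. exponential) tails indicate Brownian motion. A minimal sketch of that analysis, assuming the maximum-likelihood (Hill) exponent estimator rather than the authors' exact pipeline:

```python
# Estimate the power-law exponent mu of a set of step lengths via the
# maximum-likelihood (Hill) estimator, mu = 1 + n / sum(ln(s / xmin)).
import math
import random

def ml_exponent(steps, xmin=1.0):
    tail = [s for s in steps if s >= xmin]
    return 1.0 + len(tail) / sum(math.log(s / xmin) for s in tail)

random.seed(0)
# Synthetic Levy-like steps with mu = 2, drawn by inverse-transform
# sampling of P(s) ~ s^-2 for s >= 1:  s = 1 / (1 - u).
levy_steps = [1.0 / (1.0 - random.random()) for _ in range(5000)]
mu = ml_exponent(levy_steps)
print(round(mu, 2))  # close to 2.0 for this synthetic sample
```

On real trajectories one would first extract step lengths between turns and also compare the power-law fit against an exponential alternative before labeling a walk as a Lévy flight.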
Planetary Data Systems (PDS) Imaging Node Atlas II
NASA Technical Reports Server (NTRS)
Stanboli, Alice; McAuley, James M.
2013-01-01
The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through more than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, JAVA), a Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows for the search, retrieval, and download of planetary images and associated metadata from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions with similar targets. If desired, the end user can also use a mission-specific view of the Atlas; the mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas: it is a multi-mission search engine with both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. It lets the end user query information about each image and ignore data of no interest. Users can reduce the number of images to examine by defining an area of interest with latitude and longitude ranges.
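The area-of-interest filter described above can be sketched as a simple bounding-box predicate over image metadata. The record fields and identifiers below are invented for illustration; they do not reflect the Atlas's actual schema.

```python
# Toy sketch of an area-of-interest query: keep only image records whose
# center coordinates fall inside user-supplied latitude/longitude ranges.
records = [
    {"id": "IMG-001", "mission": "Mars Global Surveyor", "lat": 12.5, "lon": 44.0},
    {"id": "IMG-002", "mission": "Viking Orbiter",       "lat": -30.2, "lon": 190.0},
    {"id": "IMG-003", "mission": "2001 Mars Odyssey",    "lat": 15.0, "lon": 47.5},
]

def in_area(rec, lat_range, lon_range):
    return (lat_range[0] <= rec["lat"] <= lat_range[1]
            and lon_range[0] <= rec["lon"] <= lon_range[1])

hits = [r["id"] for r in records if in_area(r, (10, 20), (40, 50))]
print(hits)  # ['IMG-001', 'IMG-003']
```

Because the UPC normalizes every mission into one coordinate system, a single filter like this can serve queries that span multiple missions of the same target body.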
Herrera, Samantha; Enuameh, Yeetey; Adjei, George; Ae-Ngibise, Kenneth Ayuurebobi; Asante, Kwaku Poku; Sankoh, Osman; Owusu-Agyei, Seth; Yé, Yazoume
2017-10-23
Lack of valid and reliable data on malaria deaths continues to be a problem that plagues the global health community. To address this gap, the verbal autopsy (VA) method was developed to ascertain cause of death at the population level. Despite the adoption and wide use of VA, there are many recognized limitations of VA tools and methods, especially for measuring malaria mortality. This study synthesizes the strengths and limitations of existing VA tools and methods for measuring malaria mortality (MM) in low- and middle-income countries through a systematic literature review. The authors searched PubMed, Cochrane Library, Popline, WHOLIS, Google Scholar, and INDEPTH Network Health and Demographic Surveillance System sites' websites from 1 January 1990 to 15 January 2016 for articles and reports on MM measurement through VA. Articles were included if they presented results from a VA study in which malaria was a cause of death, or discussed limitations/challenges related to measurement of MM through VA. Two authors independently searched the databases and websites and conducted a synthesis of articles using a standard matrix. The authors identified 828 publications; 88 were included in the final review. Most publications were VA studies; others were systematic reviews discussing VA tools or methods, editorials or commentaries, and studies using VA data to develop MM estimates. The main limitations were the low sensitivity and specificity of VA tools for measuring MM. Other limitations included the lack of standardized VA tools and methods, and the lack of a 'true' gold standard for assessing the accuracy of VA-based malaria mortality measurement. Existing VA tools and methods for measuring MM have limitations. Given the need for data to measure progress toward the World Health Organization's Global Technical Strategy for Malaria 2016-2030 goals, the malaria community should define strategies for improving MM estimates, including exploring whether VA tools and methods could be further improved.
Longer term strategies should focus on improving countries' vital registration systems for more robust and timely cause of death data.
van Karnebeek, Clara D M; Houben, Roderick F A; Lafek, Mirafe; Giannasi, Wynona; Stockler, Sylvia
2012-07-23
Intellectual disability (ID) is a devastating and frequent condition, affecting 2-3% of the population worldwide. Early recognition of treatable underlying conditions drastically improves health outcomes and decreases burdens to patients, families and society. Our systematic literature review identified 81 such inborn errors of metabolism, which present with ID as a prominent feature and are amenable to causal therapy. The WebAPP translates this knowledge of rare diseases into a diagnostic tool and information portal. Freely available as a WebAPP via http://www.treatable-id.org and, from the end of 2012, via the APP store, this diagnostic tool is designed for all specialists evaluating children with global delay / ID and for laboratory scientists. Information on the 81 diseases is presented in different ways with search functions: 15 biochemical categories, neurologic and non-neurologic signs & symptoms, diagnostic investigations (metabolic screening tests in blood and urine identify 65% of all IEM), therapies & effects on primary (IQ/developmental quotient) and secondary outcomes, and available evidence. For each rare condition a 'disease page' serves as an information portal with online access to specific genetics, biochemistry, phenotype, diagnostic tests and therapeutic options. As new knowledge and evidence is gained from expert input and PubMed searches, this tool will be continually updated. The WebAPP is an integral part of a protocol prioritizing treatability in the work-up of every child with global delay / ID. A 3-year funded study will enable an evaluation of its effectiveness. For rare diseases, a field in which financial and scientific resources are particularly scarce, knowledge translation challenges are abundant. This WebAPP capitalizes on technology to raise awareness of rare treatable diseases and their common presenting clinical feature of ID, with the potential to improve health outcomes.
This innovative digital tool is designed to motivate health care providers to search actively for treatable causes of ID, and support an evidence-based approach to rare metabolic diseases. In our current -omics world with continuous information flow, the effective synthesis of data into accessible, clinical knowledge has become ever more essential to bridge the gap between research and care.
Review of Twitter for infectious diseases clinicians: useful or a waste of time?
Goff, Debra A; Kullar, Ravina; Newland, Jason G
2015-05-15
Twitter is a social networking service that has emerged as a valuable tool for healthcare professionals (HCPs). It is the only platform that allows one to connect, engage, learn, and educate oneself and others in real time on a global scale. HCPs are using social media tools to communicate, educate, and engage with their peers worldwide. Twitter allows HCPs to deliver easily accessible "real-time" clinical information on a global scale. Twitter has more than 500 million active users who generate more than 58 million tweets and 2.1 billion search queries every day. Here, we explain why Twitter is important, how and when an infectious diseases (ID) HCP should use Twitter, the impact it has in disseminating ID news, and its educational value. We also describe various tools within Twitter, such as Twitter Chat, that connect and bond HCPs on a specific topic. Twitter may help ID HCPs teach others about the global responsible use of antimicrobials in a world of escalating antimicrobial resistance. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Hu, Jialu; Kehr, Birte; Reinert, Knut
2014-02-15
Owing to recent advancements in high-throughput technologies, protein-protein interaction networks of more and more species are becoming available in public databases. The question of how to identify functionally conserved proteins across species has attracted considerable attention in computational biology. Network alignments provide a systematic way to solve this problem. However, most existing alignment tools encounter limitations in tackling it. Therefore, the demand for faster and more efficient alignment tools is growing. We present a fast and accurate algorithm, NetCoffee, which finds a global alignment of multiple protein-protein interaction networks. NetCoffee searches for a global alignment by maximizing a target function using simulated annealing on a set of weighted bipartite graphs that are constructed using a triplet approach similar to T-Coffee. To assess its performance, NetCoffee was applied to four real datasets. Our results suggest that NetCoffee remedies several limitations of previous algorithms, outperforms all existing alignment tools in terms of speed and nevertheless identifies biologically meaningful alignments. The source code and data are freely available for download under the GNU GPL v3 license at https://code.google.com/p/netcoffee/.
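The core idea in the abstract above, maximizing an alignment scoring function by simulated annealing, can be illustrated with a generic sketch. This is not NetCoffee's actual code; the toy objective and neighbor function are invented for illustration:

```python
import math
import random

def simulated_annealing(score, neighbor, start, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Maximize `score` by simulated annealing: worse moves are accepted with
    probability exp(delta / T), letting the search escape local optima."""
    rng = random.Random(seed)
    current, best = start, start
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = score(cand) - score(current)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = cand
            if score(current) > score(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy multimodal objective over one integer "mapping choice" (illustrative only).
f = lambda x: -(x - 7) ** 2 + 5 * math.cos(x)
step = lambda x, rng: x + rng.choice([-1, 1])
print(simulated_annealing(f, step, start=0))
```

NetCoffee applies the same accept/reject scheme to candidate matchings on its weighted bipartite graphs rather than to a scalar toy function.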
Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments.
Daily, Jeff
2016-02-10
Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. A faster intra-sequence local pairwise alignment implementation is described and benchmarked, including new global and semi-global variants. Using a 375-residue query sequence, a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 24-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. Rognes's SWIPE optimal database search application is still generally the fastest available, at 1.2 to at best 2.4 times faster than Parasail for sequences shorter than 500 amino acids; however, Parasail was faster for longer sequences. For global alignments, Parasail's prefix scan implementation is generally the fastest, faster even than Farrar's 'striped' approach, though the opal library is faster for single-threaded applications. The software library is designed for 64-bit Linux, OS X, or Windows on processors with SSE2, SSE4.1, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. Applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
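Parasail accelerates classic alignment recurrences with SIMD instructions; the Needleman-Wunsch global-alignment score it computes can be sketched in plain, unvectorized Python (a reference implementation of the recurrence, not Parasail's API):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming.
    Keeps only the previous DP row, so memory is O(len(b))."""
    prev = [j * gap for j in range(len(b) + 1)]  # aligning a prefix of b to gaps
    for i, ca in enumerate(a, 1):
        curr = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(nw_score("GATTACA", "GCATGCU"))
```

Each cell update here is one `max` over three candidates; SIMD libraries such as Parasail perform many of these cell updates per instruction, which is what the GCUPS figures above measure.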
Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view
NASA Astrophysics Data System (ADS)
Locati, Mario; Rovida, Andrea; Albini, Paola
2014-05-01
Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to do their daily work with general-purpose tools and/or to code their own to address specific tasks. The lack of out-of-the-box tools expressly conceived to deal with historical data leads to a huge amount of time lost performing tedious tasks: searching for the data and manually reformatting it to move from one tool to another, sometimes at the cost of losing the original data. This situation is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.
A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.
Zhang, Geng; Li, Yangmin
2016-06-01
Avoiding entrapment in local optima is a major challenge, especially for high-dimensional nonseparable problems in which the interdependencies among vector elements are unknown. To improve optimization performance, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses a 1-D swarm to search each dimension separately and thus converges fast. Moreover, it can obtain global optimum elements, according to our experimental results and analyses. MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate the good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.
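The PSO component underlying CPSO-MHS can be illustrated with a minimal global-best particle swarm optimizer. This is a generic textbook sketch, not the authors' code; the inertia and acceleration parameters are illustrative:

```python
import random

def pso(f, dim, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO (minimization). CPSO-MHS layers a cooperative
    1-D swarm and modified harmony search on top of this basic scheme."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]            # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]    # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

best, val = pso(lambda x: sum(t * t for t in x), dim=5, bounds=(-5, 5))
print(round(val, 6))
```

On a separable sphere function this converges quickly; the point of hybrids such as CPSO-MHS is to keep progress on nonseparable, multimodal landscapes where plain PSO stalls.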
Gupta, Rajnish K; McEvoy, Matthew D
2016-01-01
Decision support tools have been demonstrated to improve adherence to medical guidelines; however, smartphone applications (apps) have not been studied in this regard. In a collaboration between Vanderbilt University and the American Society of Regional Anesthesia and Pain Medicine (ASRA), the ASRA Coags Regional app was created to be a decision support tool for the 2010 published guideline on regional anesthesia for patients receiving anticoagulation. This is a review of the distribution and usage of this app. The app was created to be a user-friendly version of the guideline. Download statistics were collected from April 2014 to October 2015, and app usage data were collected from October 2014 to October 2015. Usage data were analyzed for number of devices, number of search sessions, medications searched, and types of procedures. There were 8381 downloads, with 83% from North America. Of users who allowed data tracking, 4504 unique devices were identified with 30,003 separate search events. The most searched-for medications were rivaroxaban (n = 4427; 11%), clopidogrel (n = 4042; 10%), and enoxaparin, prophylactic twice daily dosing (n = 3249; 8%). Neuraxial procedures (n = 22,477; 78%) were the most commonly searched-for procedures and over half (n = 22,773; 52%) the users were interested in how long to hold a medication before performing a procedure. This is the first publication of download and usage data concerning medical smartphone apps. It provides a template for future app uptake and use in clinical practice. The app platform provides a new mechanism of rapidly disseminating guidelines and facilitating distribution of frequent updates.
Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.
2010-12-01
While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow, and their comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD).
This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and the lessons learned. References: [1] R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. [2] R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010). [3] Devarakonda, R.; Palanisamy, G.; Green, J.; Wilson, B. E. "Mercury: An Example of Effective Software Reuse for Metadata Management Data Discovery and Access", Eos Trans. AGU, 89(53), Fall Meet. Suppl., IN11A-1019 (2008).
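A minimal sketch of the OAI-PMH harvesting step described above, using only the standard library: building a ListRecords request and extracting Dublin Core titles from a response. The endpoint and the canned response are hypothetical; only the protocol parameters (verb, metadataPrefix) and the namespace URIs come from the OAI-PMH and Dublin Core specifications:

```python
import urllib.parse
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL. Dublin Core (oai_dc) is the
    one format every OAI-PMH repository is required to support."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base + "?" + urllib.parse.urlencode(params)

def harvest_titles(xml_text):
    """Pull dc:title values out of a ListRecords response document."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.iter(DC + "title")]

# Hypothetical endpoint and a minimal canned response, for illustration only.
url = list_records_url("https://example.org/oai")
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Soil moisture, 2000-2005</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""
print(url)
print(harvest_titles(sample))
```

A real harvester would fetch the URL, honor the protocol's resumptionToken for paging, and map richer formats (FGDC, DIF, EML) into its search index, as Mercury does.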
NASA Astrophysics Data System (ADS)
Fang, H.; Kato, H.; Rodell, M.; Teng, W. L.; Vollmer, B. E.
2008-12-01
The Global Land Data Assimilation System (GLDAS) has been generating a series of land surface state (e.g., soil moisture and surface temperature) and flux (e.g., evaporation and sensible heat flux) products, simulated by four land surface models (CLM, Mosaic, Noah and VIC). These products are now accessible at the Hydrology Data and Information Services Center (HDISC), a component of the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Current GLDAS data hosted at HDISC include a set of 1.0° data products, covering 1979 to the present, from the four models and a 0.25° data product, covering 2000 to the present, from the Noah model. In addition to basic anonymous ftp data downloading, users can avail themselves of several advanced data search and downloading services, such as Mirador and OPeNDAP. Mirador is a Google-based search tool that provides keyword searching and on-the-fly spatial and parameter subsetting of selected data. OPeNDAP (Open-source Project for a Network Data Access Protocol) enables remote OPeNDAP clients to access OPeNDAP-served data regardless of local storage format. Additional data services to be available in the near future from HDISC include (1) an on-the-fly converter of GLDAS to NetCDF and binary data formats; (2) temporal aggregation of GLDAS files; and (3) Giovanni, an online visualization and analysis tool that provides a simple way to visualize, analyze, and access vast amounts of data without having to download the data.
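An OPeNDAP client subsets data by appending a constraint expression to the dataset URL, which is what lets it serve a slice of a large granule without a full download. A minimal sketch of composing such a request (the host, dataset path, and variable name are hypothetical, not actual HDISC paths):

```python
def opendap_subset_url(base, variable, ranges):
    """Compose an OPeNDAP URL with a constraint expression selecting one
    variable over index ranges, e.g. var[t0:t1][y0:y1][x0:x1].
    The '.ascii' suffix requests a text rendering of the subset."""
    constraint = variable + "".join(f"[{a}:{b}]" for a, b in ranges)
    return f"{base}.ascii?{constraint}"

# Hypothetical GLDAS/Noah granule served via OPeNDAP (illustrative names).
url = opendap_subset_url(
    "https://hydro1.example.gov/dods/GLDAS_NOAH025_M",
    "soilm0_10cm",
    [(0, 11), (200, 300), (500, 700)],
)
print(url)
```

The server evaluates the constraint and returns only the requested hyperslab, regardless of how the data are stored on disk.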
From molecule to solid: The prediction of organic crystal structures
NASA Astrophysics Data System (ADS)
Dzyabchenko, A. V.
2008-10-01
A method for predicting the structure of a molecular crystal based on a systematic search for the global potential energy minimum is considered. The method takes into account the unequal occurrences of the structural classes of organic crystals and the symmetry of the multidimensional configuration space. The global minimization program PMC, the crystal structure comparison program CRYCOM, and the FitMEP program for approximating the distributions of the electrostatic potentials of molecules are presented as tools for numerically solving the problem. Examples of predicted structures substantiated experimentally and the author's experience of participating in international tests of crystal structure prediction organized by the Cambridge Crystallographic Data Centre (Cambridge, UK) are considered.
Report on the Global Data Assembly Center (GDAC) to the 12th GHRSST Science Team Meeting
NASA Technical Reports Server (NTRS)
Armstrong, Edward M.; Bingham, Andrew; Vazquez, Jorge; Thompson, Charles; Huang, Thomas; Finch, Chris
2011-01-01
In 2010/2011 the Global Data Assembly Center (GDAC) at NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC) continued its role as the primary clearinghouse and access node for operational Group for High Resolution Sea Surface Temperature (GHRSST) datastreams, as well as its collaborative role with the NOAA Long Term Stewardship and Reanalysis Facility (LTSRF) for archiving. Here we report on our data management activities and infrastructure improvements since the last science team meeting in June 2010. These include the implementation of all GHRSST datastreams in the new PO.DAAC Data Management and Archive System (DMAS) for more reliable and timely data access. GHRSST dataset metadata are now stored in a new database that has made the maintenance and quality improvement of metadata fields more straightforward. A content management system for a revised suite of PO.DAAC web pages allows dynamic access to a subset of these metadata fields for enhanced dataset description as well as discovery through a faceted search mechanism from the perspective of the user. From the discovery and metadata standpoint, the GDAC has also implemented the NASA version of the OpenSearch protocol for searching for GHRSST granules and developed a web service to generate ISO 19115-2 compliant metadata records. Furthermore, the GDAC has continued to implement a new suite of tools and services for GHRSST datastreams, including a Level 2 subsetter known as Dataminer, a revised POET Level 3/4 subsetter and visualization tool, a Google Earth interface to selected daily global Level 2 and Level 4 data, and an experimental THREDDS catalog of GHRSST data collections. Finally, we summarize the expanding user and data statistics and other metrics collected over the last year, demonstrating the broad user community and range of applications that the GHRSST project continues to serve via the GDAC distribution mechanisms.
This report also serves by extension to summarize the activities of the GHRSST Data Assembly and Systems Technical Advisory Group (DAS-TAG).
Liau, Siow Yen; Mohamed Izham, M I; Hassali, M A; Shafie, A A
2010-01-01
Cardiovascular diseases, the main causes of hospitalisation and death globally, have put an enormous economic burden on the healthcare system. Several risk factors are associated with the occurrence of cardiovascular events. At the heart of efficient prevention of cardiovascular disease is the concept of risk assessment. This paper aims to review the available cardiovascular risk-assessment tools and their applicability in predicting cardiovascular risk among Asian populations. A systematic search was performed using MeSH keywords and Boolean terms. A total of 25 risk-assessment tools were identified. Of these, only two (8%) were derived from an Asian population. These risk-assessment tools differ in various ways, including the characteristics of the derivation sample, type of study, time frame of follow-up, end points, statistical analysis and risk factors included. Very few cardiovascular risk-assessment tools have been developed in Asian populations. In order to accurately predict the cardiovascular risk of our population, there is a need to develop a risk-assessment tool based on local epidemiological data.
Creating global comparative analyses of tectonic rifts, monogenetic volcanism and inverted relief
NASA Astrophysics Data System (ADS)
van Wyk de Vries, Benjamin
2016-04-01
I have been all around the world, and to other planets, and have travelled from the present to the Archaean and back, to seek out the most significant tectonic rifts, monogenetic volcanoes and examples of inverted relief. I have done this to provide a broad foundation of comparative analysis for the Chaîne des Puys - Limagne fault nomination to UNESCO World Heritage. This would have been an impossible task if not for the cooperation of the scientific community and for Google Earth, Google Maps and academic search engines. In preparing global comparisons of geological features, these quite recently developed tools provide a powerful way to find and describe geological features. The ability to do scientific crowd sourcing, rapidly discussing features with colleagues, allows large numbers of areas to be checked, and open GIS tools (such as Google Earth) allow a standardised description. Search engines also allow the literature on areas to be checked and compared. I will present a comparative study of rifts of the world, monogenetic volcanic fields and inverted relief, integrated to analyse the full geological system represented by the Chaîne des Puys - Limagne fault. The analysis confirms that the site is an exceptional example of the first steps of continental drift in a mountain rift setting, and that this is necessarily seen through the combined landscape of tectonic, volcanic and geomorphic features. The analysis goes further to deepen the understanding of geological systems and stresses the need for more study of geological heritage using such a global and broad systems approach.
Patient-Centered Tools for Medication Information Search
Wilcox, Lauren; Feiner, Steven; Elhadad, Noémie; Vawdrey, David; Tran, Tran H.
2016-01-01
Recent research focused on online health information seeking highlights a heavy reliance on general-purpose search engines. However, current general-purpose search interfaces do not necessarily provide adequate support for non-experts in identifying suitable sources of health information. Popular search engines have recently introduced search tools in their user interfaces for a range of topics. In this work, we explore how such tools can support non-expert, patient-centered health information search. Scoping the current work to medication-related search, we report on findings from a formative study focused on the design of patient-centered, medication-information search tools. Our study included qualitative interviews with patients, family members, and domain experts, as well as observations of their use of Remedy, a technology probe embodying a set of search tools. Post-operative cardiothoracic surgery patients and their visiting family members used the tools to find information about their hospital medications and were interviewed before and after their use. Domain experts conducted similar search tasks and provided qualitative feedback on their preferences and recommendations for designing these tools. Findings from our study suggest the importance of four valuation principles underlying our tools: credibility, readability, consumer perspective, and topical relevance. PMID:28163972
Search Engines for Tomorrow's Scholars
ERIC Educational Resources Information Center
Fagan, Jody Condit
2011-01-01
Today's scholars face an outstanding array of choices when choosing search tools: Google Scholar, discipline-specific abstracts and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…
MO/DSD online information server and global information repository access
NASA Technical Reports Server (NTRS)
Nguyen, Diem; Ghaffarian, Kam; Hogie, Keith; Mackey, William
1994-01-01
Often in the past, standards and new technology information have been available only in hardcopy form, with reproduction and mailing costs proving rather significant. In light of NASA's current budget constraints and in the interest of efficient communications, the Mission Operations and Data Systems Directorate (MO&DSD) New Technology and Data Standards Office recognizes the need for an online information server (OLIS). This server would allow: (1) dissemination of standards and new technology information throughout the Directorate more quickly and economically; (2) online browsing and retrieval of documents that have been published for and by MO&DSD; and (3) searching for current and past study activities on related topics within NASA before issuing a task. This paper explores a variety of available information servers and searching tools, their current capabilities and limitations, and the application of these tools to MO&DSD. Most importantly, the discussion focuses on the way this concept could be easily applied toward improving dissemination of standards and new technologies and improving documentation processes.
Comet: an open-source MS/MS sequence database search tool.
Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R
2013-01-01
Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools. Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Powell, John D.; Owens, David; Menzies, Tim
2004-01-01
The difficulty of testing large systems, such as the one on board a NASA robotic remote explorer (RRE) vehicle, is fundamentally a search issue: how to explore the global state space representing all possible behaviors is a problem that has yet to be solved, even after many decades of work. Randomized algorithms have been known to outperform their deterministic counterparts on search problems representing a wide range of applications. In the case study presented here, the LURCH randomized algorithm proved adequate to the task of testing a NASA RRE vehicle. LURCH found all the errors found by an earlier analysis with a more complete method (SPIN). Our empirical results show that LURCH can scale to much larger models than standard model checkers like SMV and SPIN. Further, the LURCH analysis was simpler than the SPIN analysis. The simplicity and scalability of LURCH are two compelling reasons for experimenting further with this tool.
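The randomized-search idea behind LURCH can be sketched as repeated random walks through a state graph, reporting any error trace found rather than exhaustively exploring the state space. The toy model below is invented for illustration; it is not LURCH's implementation or the actual RRE model:

```python
import random

def random_search(transitions, start, is_error, walks=200, depth=50, seed=0):
    """LURCH-style randomized testing sketch: run many bounded random walks
    through a state graph; return the first error trace found, else None.
    Unlike exhaustive model checking, missing an error remains possible."""
    rng = random.Random(seed)
    for _ in range(walks):
        state, trace = start, [start]
        for _ in range(depth):
            succs = transitions.get(state, [])
            if not succs:
                break  # deadlock / terminal state
            state = rng.choice(succs)
            trace.append(state)
            if is_error(state):
                return trace  # counterexample path
    return None

# Toy model: a lander whose braking phase can reach an unsafe state.
model = {
    "descend": ["descend", "brake"],
    "brake": ["hover", "thrust_off_early"],
    "hover": ["land"],
}
trace = random_search(model, "descend", lambda s: s == "thrust_off_early")
print(trace)
```

The trade-off mirrors the abstract's point: the random walk gives no completeness guarantee, but it scales to models far larger than exhaustive checkers can handle.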
Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James
2017-09-01
Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
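The term frequency-inverse document frequency technique mentioned above can be sketched as scoring candidate search terms across a small set of relevant records, so that terms appearing in every record score zero as discriminators. This is a minimal illustration with invented example documents, not any of the cited tools:

```python
import math
from collections import Counter

def tfidf_terms(docs, top=3):
    """Rank single-word terms by summed TF-IDF across documents: term
    frequency within each document times log(N / document frequency)."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(tokenized)
    scores = Counter()
    for toks in tokenized:
        tf = Counter(toks)
        for w, c in tf.items():
            scores[w] += (c / len(toks)) * math.log(n / df[w])
    return [w for w, _ in scores.most_common(top)]

# Invented mini-corpus of "relevant" records for a smoking-cessation review.
docs = [
    "smoking cessation intervention trial",
    "nicotine replacement therapy for smoking cessation",
    "vaping as a cessation aid",
]
print(tfidf_terms(docs))
```

Note that "cessation", present in all three records, gets an IDF of zero; in practice reviewers would treat such ubiquitous terms differently depending on whether precision or sensitivity is the goal, which is exactly the trade-off the review discusses.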
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-number m-dimensional vectors, which serve as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
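The paper's encoding of a network configuration as a real-number vector can be illustrated with a decoder from a particle's position to concrete hyperparameters. The choice sets and the learning-rate mapping below are invented for illustration; the paper's actual encoding may differ:

```python
def decode_config(vector, layer_choices=(1, 2, 3), unit_choices=(32, 64, 128, 256)):
    """Decode a PSO particle (a real vector in [0,1)^3) into a hypothetical
    network configuration: depth, width, and a log-scaled learning rate."""
    d, u, lr = vector
    # Bucket each coordinate into a discrete choice set.
    layers = layer_choices[min(int(d * len(layer_choices)), len(layer_choices) - 1)]
    units = unit_choices[min(int(u * len(unit_choices)), len(unit_choices) - 1)]
    learning_rate = 10 ** (-4 + 3 * lr)  # maps [0, 1) onto [1e-4, 1e-1)
    return {"layers": layers, "units": units, "lr": learning_rate}

print(decode_config([0.9, 0.4, 0.5]))
```

With such a decoder, the swarm moves in a continuous space while each position evaluation trains a concrete DNN for a few epochs and feeds its validation score back as the particle's fitness.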
Are electronic nicotine delivery systems an effective smoking cessation tool?
Lam, Christine; West, Andrew
2015-01-01
Recent studies have estimated that 21% of all deaths over the past decade are due to smoking, making it the leading cause of premature death in Canada. To date, many steps have been taken to eradicate the global epidemic of tobacco smoking. Most recently, electronic nicotine delivery systems (ENDS) have become a popular smoking cessation tool. ENDS do not burn or use tobacco leaves, but instead vapourize a solution the user then inhales. The main constituents of the solution, in addition to nicotine when nicotine is present, are propylene glycol, with or without glycerol and flavouring agents. Currently, ENDS are not regulated, and have become a controversial topic. To determine whether ENDS are an effective smoking cessation tool. A systematic literature search was conducted in February 2015 using the following databases: PubMed, Scopus and Web of Science Core Collection. Randomized controlled trials were the only publications included in the search. A secondary search was conducted by reviewing the references of relevant publications. After conducting the primary and secondary search, 109 publications were identified. After applying all inclusion and exclusion criteria through abstract and full-text review, four publications were included in the present literature review. A low risk of bias was established for each included study using the Cochrane Collaboration risk of bias evaluation framework. The primary outcome measured in all studies was self-reported abstinence or reduction from smoking. In three of the four studies, self-reported abstinence or reduction from smoking was verified by measuring exhaled carbon monoxide. In the remaining study, the primary outcome measured was self-reported desire to smoke and measured desire to smoke. All four studies showed promise that ENDS are an effective smoking cessation tool. 
While all publications included in the present review revealed that ENDS are an effective smoking cessation aid, further evaluation of the potential health effects of long-term ENDS use remains vital.
NASA Astrophysics Data System (ADS)
Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.
2017-12-01
As spatial datasets are increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study, the computing team developed a custom, machine learning, big data computing tool designed to parse the web and return priority datasets to appropriate servers in order to develop an open-source global oil and gas infrastructure database. The results of this spatial smart search approach were validated against expert-driven, manual search results, which had required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions of EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool.
This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of `what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.
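The "spatial smart search" described above is proprietary to EDX, but its core idea of ranking web-hosted resources against priority terms can be illustrated with a minimal sketch. All terms, weights, URLs, and function names below are invented for illustration; the actual EDX tool uses machine learning rather than fixed keyword weights.

```python
# Illustrative ranking of candidate web resources for an oil and gas
# infrastructure geodatabase. This stand-in scores each (url, description)
# pair against a hand-weighted term list. All terms, weights, and URLs
# are invented for illustration.

PRIORITY_TERMS = {"pipeline": 3, "refinery": 3, "oil": 2, "gas": 2,
                  "shapefile": 2, "geodatabase": 2, "ftp": 1, "zip": 1}

def score_resource(url: str, description: str) -> int:
    """Sum the weights of priority terms found in the URL or description."""
    text = (url + " " + description).lower()
    return sum(w for term, w in PRIORITY_TERMS.items() if term in text)

def rank_resources(resources):
    """Sort (url, description) pairs by descending relevance score."""
    return sorted(resources, key=lambda r: score_resource(*r), reverse=True)

candidates = [
    ("http://example.gov/data/pipelines.zip", "oil pipeline shapefile"),
    ("http://example.org/blog/post1.html", "company news"),
]
ranked = rank_resources(candidates)
```

A real crawler would feed link text scraped from FTP listings and zip manifests into the same scoring step before handing high-scoring resources to a reviewer.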
Shrader, Sarah; Farland, Michelle Z; Danielson, Jennifer; Sicat, Brigitte; Umland, Elena M
2017-08-01
Objective. To identify and describe the available quantitative tools that assess interprofessional education (IPE) relevant to pharmacy education. Methods. A systematic approach was used to identify quantitative IPE assessment tools relevant to pharmacy education. The search strategy included the National Center for Interprofessional Practice and Education Resource Exchange (Nexus) website, a systematic search of the literature, and a manual search of journals deemed likely to include relevant tools. Results. The search identified a total of 44 tools from the Nexus website, 158 abstracts from the systematic literature search, and 570 abstracts from the manual search. A total of 36 assessment tools met the criteria to be included in the summary, and their application to IPE relevant to pharmacy education was discussed. Conclusion. Each of the tools has advantages and disadvantages. No single comprehensive tool exists to fulfill assessment needs. However, numerous tools are available that can be mapped to IPE-related accreditation standards for pharmacy education.
Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment
2012-09-01
considerable variation in how fusion centers plan for, gather requirements, select, and acquire federated search tools to bridge disparate databases … centers, when considering integrating federated search tools; by evaluating the importance of the planning, requirements gathering, selection, and acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes
Screening and assessment tools for pediatric malnutrition.
Huysentruyt, Koen; Vandenplas, Yvan; De Schepper, Jean
2016-06-18
The ideal measures for screening and assessing undernutrition in children remain a point of discussion in the literature. This review aims to provide an overview of recent advances in nutritional screening and assessment methods in children. It focuses on two major topics that have emerged in the literature since 2015: the practical endorsement of the new definition of pediatric undernutrition, with a focus on anthropometric measurements, and the search for a consensus on pediatric nutritional screening tools in different settings. Few analytical tools exist for the assessment of the nutritional status in children. The subjective global nutritional assessment has been validated by anthropometric as well as clinical outcome parameters. Nutritional screening can help in selecting patients that benefit the most from a full nutritional assessment. Two new screening tools have been developed for use in a general (mixed) hospital population, and one for a population of children with cancer. The value of screening tools in different disease-specific and outpatient pediatric populations remains to be proven.
Peirson, Leslea; Catallo, Cristina; Chera, Sunita
2013-08-01
This paper examines the development of a globally accessible online Registry of Knowledge Translation Methods and Tools to support evidence-informed public health. A search strategy, screening and data extraction tools, and writing template were developed to find, assess, and summarize relevant methods and tools. An interactive website and searchable database were designed to house the registry. Formative evaluation was undertaken to inform refinements. Over 43,000 citations were screened; almost 700 were full-text reviewed, 140 of which were included. By November 2012, 133 summaries were available. Between January 1 and November 30, 2012 over 32,945 visitors from more than 190 countries accessed the registry. Results from 286 surveys and 19 interviews indicated the registry is valued and useful, but would benefit from a more intuitive indexing system and refinements to the summaries. User stories and promotional activities help expand the reach and uptake of knowledge translation methods and tools in public health contexts. The National Collaborating Centre for Methods and Tools' Registry of Methods and Tools is a unique and practical resource for public health decision makers worldwide.
The survey on data format of Earth observation satellite data at JAXA.
NASA Astrophysics Data System (ADS)
Matsunaga, M.; Ikehata, Y.
2017-12-01
JAXA's Earth observation satellite data are distributed by a portal web site for search and delivery called "G-Portal". Users can download the satellite data of GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1, and JERS-1 from G-Portal. However, the data formats differ by satellite (HDF4, HDF5, NetCDF4, CEOS, etc.), and these formats are not familiar to new data users. Although HDF-type self-describing formats are very convenient and useful for large datasets, older-format products are not readable by open GIS tools and do not conform to OGC standards. Recently, satellite data have been widely applied to various needs such as disaster response, earth resources, monitoring the global environment, Geographic Information Systems (GIS), and so on. In order to remove a barrier to using Earth observation satellite data for new community users, JAXA has been providing format-converted products such as GeoTIFF or KMZ. In addition, JAXA provides the format conversion tool itself. We investigate trends in data formats for data archiving, dissemination, and utilization, then study how to improve the current product formats for users in various application fields and make recommendations for new products.
On contact modelling in isogeometric analysis
NASA Astrophysics Data System (ADS)
Cardoso, R. P. R.; Adetoro, O. B.
2017-11-01
IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are essentially: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates that can be achieved; (iii) the possibility of operating directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory character of the basis functions, which complicates the handling of essential boundary conditions and makes contact analysis particularly challenging. In this work, IGA is expanded to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are used to model both the rigid tools and the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme: in the first step, a global search algorithm allocates contact knots to potential contact faces; in the second (local) step, point inversion techniques are used to calculate the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.
Application of the gravity search algorithm to multi-reservoir operation optimization
NASA Astrophysics Data System (ADS)
Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.
2016-12-01
Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single-reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem. The global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
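The gravity search algorithm described above can be sketched in a minimal form: agents are assigned masses from their fitness, attract one another according to a gravity-like law, and the gravitational "constant" decays over iterations to shift the search from exploration to exploitation. This is an illustrative implementation of the generic GSA, not the authors' code; the agent count, initial constant, and decay schedule are assumptions.

```python
import random

def gsa_minimize(f, dim, bounds, n_agents=20, iters=200, g0=30.0):
    """Minimal gravitational search algorithm sketch: agents with
    fitness-derived masses attract each other; g decays over time."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fits = [f(x) for x in X]
        fbest, fworst = min(fits), max(fits)
        if fbest < best_f:
            best_f, best_x = fbest, list(X[fits.index(fbest)])
        span = (fworst - fbest) or 1e-12
        m = [(fworst - fi) / span for fi in fits]   # better fitness -> larger mass
        total = sum(m) or 1e-12
        M = [mi / total for mi in m]
        g = g0 * (1.0 - t / iters)                  # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                R = sum((X[j][d] - X[i][d]) ** 2 for d in range(dim)) ** 0.5
                for d in range(dim):
                    # a_i = sum_j rand * g * M_j * (x_j - x_i) / (R + eps)
                    acc[d] += random.random() * g * M[j] * (X[j][d] - X[i][d]) / (R + 1e-12)
            for d in range(dim):
                V[i][d] = random.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f

random.seed(0)
best_x, best_f = gsa_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

On a smooth benchmark such as the sphere function the swarm collapses toward the origin as g decays, which is the convergence behavior the paper measures against the GA.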
Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun
2007-01-01
Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.
Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature.
Song, Michael M; Simonsen, Cheryl K; Wilson, Joanna D; Jenkins, Marjorie R
2016-02-01
An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH.
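The idea of bundling sex- and gender-related terms into a reusable PubMed filter can be sketched with NCBI's E-utilities `esearch` endpoint. The term list below is invented for illustration and is not the actual TTUHSC SGSH search tool described in the abstract.

```python
from urllib.parse import urlencode

# Hypothetical bundle of sex/gender terms; the validated SGSH filter
# described in the article uses its own curated combination of MeSH
# terms, text words, and title words.
SGSH_TERMS = ['"sex difference"[tiab]', '"gender difference"[tiab]',
              '"sex factors"[mh]', '"sex characteristics"[mh]']

def build_sgsh_query(topic: str) -> str:
    """Combine a disease topic with the bundled sex/gender terms (OR-ed)."""
    return f"({topic}) AND ({' OR '.join(SGSH_TERMS)})"

def esearch_url(topic: str, retmax: int = 20) -> str:
    """Build an E-utilities esearch URL for the bundled query."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    return base + "?" + urlencode({"db": "pubmed",
                                   "term": build_sgsh_query(topic),
                                   "retmax": retmax})
```

Fetching the resulting URL with `urllib.request` returns matching PMIDs as XML; the tool described in the article wraps a similar bundled query for clinical researchers.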
Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite
NASA Astrophysics Data System (ADS)
Kanakubo, Masaaki; Hagiwara, Masafumi
In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA performance. In the Bee system, each chromosome initially tries to find a good solution individually, as a global search. When some chromosome is regarded as superior, the other chromosomes search around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. In Inverse-elitism, on the other hand, an inverse-elite whose gene values are reversed from those of the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with the Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen its global search ability, while retaining strong local search ability. The proposed method thus gains synergistic effects from the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
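The inverse-elitism idea, injecting the bitwise complement of the elite chromosome to diversify the population, can be sketched for a binary-coded GA. This simplified sketch omits the Bee system and the Pseudo-simplex local search that the proposed method combines with it; the parent selection, crossover, and mutation settings are illustrative assumptions.

```python
import random

def inverse_elitism_step(pop, fitness):
    """One generation of a toy binary GA with inverse-elitism: alongside
    the elite, its bitwise complement (the inverse-elite) is injected to
    diversify the population."""
    ranked = sorted(pop, key=fitness, reverse=True)
    elite = ranked[0]
    inverse_elite = [1 - g for g in elite]           # reversed gene values
    next_pop = [elite, inverse_elite]
    while len(next_pop) < len(pop):
        p1, p2 = random.sample(ranked[: len(pop) // 2], 2)  # pick from better half
        cut = random.randrange(1, len(elite))               # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:                           # point mutation
            i = random.randrange(len(child))
            child[i] = 1 - child[i]
        next_pop.append(child)
    return next_pop

# Toy run on the onemax problem (maximize the number of 1 bits).
random.seed(1)
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
initial_best = max(map(sum, population))
for _ in range(30):
    population = inverse_elitism_step(population, sum)
final_best = max(map(sum, population))
```

Because the elite always survives, the best fitness is monotone non-decreasing, while the inverse-elite keeps feeding the population genetic material far from the current optimum.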
Criteria for Comparing Children's Web Search Tools.
ERIC Educational Resources Information Center
Kuntz, Jerry
1999-01-01
Presents criteria for evaluating and comparing Web search tools designed for children. Highlights include database size; accountability; categorization; search access methods; help files; spell check; URL searching; links to alternative search services; advertising; privacy policy; and layout and design. (LRW)
Finding Protein and Nucleotide Similarities with FASTA
Pearson, William R.
2016-01-01
The FASTA programs provide a comprehensive set of rapid similarity searching tools ( fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local and global similarity searches ( ssearch36, ggsearch36) and for searching with short peptides and oligonucleotides ( fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity (Unit 3.5). The FASTA programs can produce “BLAST-like” alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases (Unit 9.4). The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. PMID:27010337
Yu, Yao; Tu, Kang; Zheng, Siyuan; Li, Yun; Ding, Guohui; Ping, Jie; Hao, Pei; Li, Yixue
2009-08-25
In the post-genomic era, high-throughput gene expression detection technology provides huge amounts of experimental data, which challenges the traditional pipelines for data processing and analysis in scientific research. In our work, we integrated gene expression information from Gene Expression Omnibus (GEO), biomedical ontology from Medical Subject Headings (MeSH), and signaling pathway knowledge from sigPathway entries to develop a context mining tool for gene expression analysis, GEOGLE. GEOGLE offers a rapid and convenient way to search for relevant experimental datasets, pathways, and biological terms according to multiple types of queries, including biomedical vocabularies, GDS IDs, gene IDs, pathway names, and signature lists. Moreover, GEOGLE summarizes the signature genes from a subset of GDSes and estimates the correlation between gene expression and the phenotypic distinction with an integrated p value. This approach, which performs global searching of expression data, may expand the traditional way of collecting heterogeneous gene expression experiment data. GEOGLE is a novel tool that provides researchers a quantitative way to understand the correlation between gene expression and phenotypic distinction through meta-analysis of gene expression datasets from different experiments, as well as the biological meaning behind it. The web site and user guide of GEOGLE are available at: http://omics.biosino.org:14000/kweb/workflow.jsp?id=00020.
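The abstract does not specify how GEOGLE computes its integrated p value; Fisher's method is one standard way to combine independent p-values across datasets and serves here as an illustrative stand-in. For an even number of degrees of freedom the chi-square survival function has a closed form, so the sketch needs only the standard library.

```python
import math

def fisher_combined_p(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the null hypothesis.
    For df = 2k the survival function is
    exp(-X/2) * sum_{i<k} (X/2)**i / i!, used directly below."""
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)   # X / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
```

With a single p-value the method is the identity; with several concordant small p-values the combined p shrinks, which is the qualitative behavior an "integrated p value" over multiple GDS datasets would need.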
Federal global change data plan reviewed
NASA Astrophysics Data System (ADS)
Simarski, Lynn Teo
1992-02-01
Scientists and data managers are grappling with an unprecedented challenge: how to handle the explosion of data being produced by global change research. The federal government is developing a plan to manage data among the various federal agencies that participate in the U.S. Global Change Research Program. From January 22 to 24, some 80 scientists, data managers, and officials from federal agencies, universities, laboratories, and other institutions met in Washington, D.C. to critique the draft plan. New observational tools are expected to increase the flow of global change data to ever more massive proportions, while all the data now available is not catalogued properly. Even now, if a researcher does manage to find appropriate data, it may not be documented sufficiently to use. “These practical difficulties are especially acute for global change researchers, who need to search for data and information very broadly across scientific disciplines and sometimes decades after the data were archived,” explains the draft plan by the Committee on Earth and Environmental Sciences of the Office of Science and Technology Policy.
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization, and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic the seal pup's behavior of finding the best lair, providing a new algorithm for use in global optimization problems.
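The two-state random walk at the core of RSS can be sketched in one dimension: a short Gaussian (Brownian) step in the normal state and a heavy-tailed Levy-like jump in the urgent state, with predator noise triggering the switch. The power-law inversion used for the Levy draw and the noise threshold are illustrative assumptions, not the authors' exact formulation.

```python
import random

def rss_walk_step(x, state, step=0.1):
    """One movement step of a Ringed Seal Search agent (1-D sketch).
    'normal' -> short Brownian (Gaussian) step for intensive local search;
    'urgent' -> heavy-tailed Levy-like jump for extensive global search."""
    if state == "normal":
        return x + random.gauss(0.0, step)            # Brownian walk
    # Levy-like jump: power-law inversion with alpha = 1.5, random sign
    u = random.random() or 1e-12
    jump = step * u ** (-1.0 / 1.5)
    return x + jump * random.choice((-1.0, 1.0))

def next_state(noise_level, threshold=0.5):
    """Predator noise above the threshold switches the pup to urgent search."""
    return "urgent" if noise_level > threshold else "normal"

# Tiny simulated trajectory driven by a noise sequence.
pos, state = 0.0, "normal"
for noise in [0.1, 0.2, 0.9, 0.3]:
    state = next_state(noise)
    pos = rss_walk_step(pos, state)
```

The intensive/extensive contrast shows up directly in step sizes: Gaussian steps stay near the current lair, while the heavy-tailed draws occasionally produce long relocations to sparse targets.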
Use of instruments to evaluate leadership in nursing and health services.
Carrara, Gisleangela Lima Rodrigues; Bernardes, Andrea; Balsanelli, Alexandre Pazetto; Camelo, Silvia Helena Henriques; Gabriel, Carmen Silvia; Zanetti, Ariane Cristina Barboza
2018-03-12
To identify the available scientific evidence about the use of instruments for the evaluation of leadership in health and nursing services, and to verify the use of leadership styles/models/theories in the construction of these tools. Integrative literature review of studies indexed in the LILACS, PUBMED, CINAHL, and EMBASE databases from 2006 to 2016. Thirty-eight articles were analyzed, yielding 19 leadership evaluation tools; the most used were the Multifactor Leadership Questionnaire, the Global Transformational Leadership Scale, the Leadership Practices Inventory, the Servant Leadership Questionnaire, the Servant Leadership Survey, and the Authentic Leadership Questionnaire. The literature search made it possible to identify the main theories/styles/models of contemporary leadership and to analyze their use in the design of leadership evaluation tools, with the transformational, situational, servant, and authentic leadership categories standing out as the most prominent. To a lesser extent, the quantum, charismatic, and clinical leadership types were evidenced.
Improving e-book access via a library-developed full-text search tool.
Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N
2007-01-01
This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.
Medication use in children with asthma: not a child size problem.
Grover, Charu; Armour, Carol; Asperen, Peter Paul Van; Moles, Rebekah; Saini, Bandana
2011-12-01
The global burden of pediatric asthma is high. Governments and health-care systems are affected by the increasing costs of childhood asthma--in terms of direct health-care costs and indirect costs due to loss of parental productivity, missed school days, and hospitalizations. Despite the availability of effective treatment, the current use of medications in children with asthma is suboptimal. The purpose of this review is to scope the empirical literature to identify the problems associated with the use of pediatric asthma medications. The findings will help to design interventions aiming to improve the use of asthma medications among children. A literature search using electronic search engines (i.e., Medline, International Pharmaceutical Abstracts (IPA), PubMed, PsycINFO, and Cumulative Index to Nursing and Allied Health Literature (CINAHL)) and the search terms "asthma," "children," and "medicines" (and derivatives of these keywords) was conducted. The search terms were expanded to include emergent themes arising out of search findings. Content themes relating to parents, children themselves, health-care professionals, organizational systems, and specific medications and devices were found. Within these themes, key issues included a lack of parental knowledge about asthma and asthma medications, lack of information provided to parents, parental beliefs and fears, parental behavioral problems, the high costs of medications and devices, the child's self-image, the need for more child responsibility, physician nonadherence to prescribing guidelines, "off-label" prescribing, poor understanding of teachers, lack of access to educational resources, and specific medications. These key issues should be taken into account when modifying the development of educational tools. These tools should focus on targeting the children themselves, the parent/carers, the health-care professionals, and various organizational systems.
Internet Use for Health-Care Information by Subjects With COPD.
Delgado, Cionéia K; Gazzotti, Mariana R; Santoro, Ilka L; Carvalho, Andrea K; Jardim, José R; Nascimento, Oliver A
2015-09-01
Although the internet is an important tool for entertainment, work, learning, shopping, and communication, it is also a possible source for information on health and disease. The aim of this study was to evaluate the proportion of subjects with COPD in São Paulo, Brazil, who use the internet to obtain information about their disease. Subjects (N = 382) with COPD answered a 17-question survey, including information regarding computer use, internet access, and searching for sites on COPD. Our sample was distributed according to the socioeconomic levels of the Brazilian population (low, 17.8%; medium, 66.5%; and high, 15.7%). Most of the subjects in the sample were male (62.6%), with a mean age of 67.0 ± 9.9 y. According to Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages, 74.3% of the subjects were in stage II or III. In addition, 51.6% of the subjects had a computer, 49.7% accessed the internet, and 13.9% used it to search for information about COPD. The internet was predominantly accessed by male (70.3%) and younger (64.6 ± 9.5 y of age) subjects compared with female (29.7%, P = .04) and older (67.5 ± 9.6 y of age, P < .007) subjects. Searching for information about COPD on the internet was associated with having a computer (5.9-fold), Medical Research Council dyspnea level 1 (5.3-fold), and high social class (8.4-fold). The search for information on COPD was not influenced by GOLD staging. A low percentage of subjects with COPD in São Paulo use the internet as a tool to obtain information about their disease. This search is associated with having a computer, low dyspnea score, and high socioeconomic level. Copyright © 2015 by Daedalus Enterprises.
The potential of genetic algorithms for conceptual design of rotor systems
NASA Technical Reports Server (NTRS)
Crossley, William A.; Wells, Valana L.; Laananen, David H.
1993-01-01
The capabilities of genetic algorithms as a non-calculus-based, global search method make them potentially useful in the conceptual design of rotor systems. Reasonably simple analysis tools were coupled to the genetic algorithm, and the resulting program was used to generate designs for rotor systems matching requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in the design of new rotors.
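As a hedged illustration of the approach the abstract describes, the sketch below couples a minimal genetic algorithm to a stand-in objective function. The objective and target values are hypothetical placeholders, not the paper's actual rotor analysis.

```python
import random

# Hypothetical stand-in for the paper's rotor analysis: score a candidate
# design by its squared mismatch against target requirements.
TARGET = [7.5, 210.0, 0.09, 4.0]  # illustrative design targets, not real data

def mismatch(design):
    return sum((d - t) ** 2 for d, t in zip(design, TARGET))

def genetic_search(pop_size=40, generations=200, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 300.0) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mismatch)                    # rank by fitness
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:           # occasional Gaussian mutation
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0.0, 5.0)
            children.append(child)
        pop = parents + children                  # elitist replacement
    return min(pop, key=mismatch)

best = genetic_search()
```

Because the GA only ranks candidate designs, the analysis code can stay a black box, which is what makes the coupling to simple analysis tools straightforward.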
NASA Astrophysics Data System (ADS)
Morton, J. J.; Ferrini, V. L.
2015-12-01
The Marine Geoscience Data System (MGDS, www.marine-geo.org) operates an interactive digital data repository and metadata catalog that provides access to a variety of marine geology and geophysical data from throughout the global oceans. Its Marine-Geo Digital Library includes common marine geophysical data types and supporting data and metadata, as well as complementary long-tail data. The Digital Library also includes community data collections and custom data portals for the GeoPRISMS, MARGINS and Ridge2000 programs, for active source reflection data (Academic Seismic Portal), and for marine data acquired by the US Antarctic Program (Antarctic and Southern Ocean Data Portal). Ensuring that these data are discoverable not only through our own interfaces but also through standards-compliant web services is critical for enabling investigators to find data of interest. Over the past two years, MGDS has developed several new RESTful web services that enable programmatic access to metadata and data holdings. These web services are compliant with the EarthCube GeoWS Building Blocks specifications and are currently used to drive our own user interfaces. New web applications have also been deployed to provide a more intuitive user experience for searching, accessing and browsing metadata and data. Our new map-based search interface combines components of the Google Maps API with our web services for dynamic searching and exploration of geospatially constrained data sets. Direct introspection of nearly all data formats for hundreds of thousands of data files curated in the Marine-Geo Digital Library has allowed for precise geographic bounds, which allow geographic searches to an extent not previously possible. All MGDS map interfaces utilize the web services of the Global Multi-Resolution Topography (GMRT) synthesis for displaying global basemap imagery and for dynamically providing depth values at the cursor location.
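As a hedged sketch of what programmatic access to such RESTful services looks like, the snippet below composes a geospatially constrained query string. The base URL and parameter names are placeholders for illustration, not the actual MGDS/GeoWS routes.

```python
from urllib.parse import urlencode

# Placeholder endpoint: the real MGDS/GeoWS routes are documented by the project.
BASE_URL = "https://data.example.org/services/search"

def build_query(north, south, east, west, data_type):
    """Compose a bounding-box query string for a hypothetical search service."""
    params = {
        "north": north, "south": south,
        "east": east, "west": west,
        "data_type": data_type,
        "format": "json",
    }
    return BASE_URL + "?" + urlencode(params)

url = build_query(10.0, -10.0, 150.0, 130.0, "bathymetry")
```

Keeping the spatial constraint in plain query parameters is what lets the same service back both interactive map interfaces and scripted harvesting.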
Lew, Charles Chin Han; Yandell, Rosalie; Fraser, Robert J L; Chua, Ai Ping; Chong, Mary Foong Fong; Miller, Michelle
2017-07-01
Malnutrition is associated with poor clinical outcomes among hospitalized patients. However, studies linking malnutrition with poor clinical outcomes in the intensive care unit (ICU) often have conflicting findings due in part to the inappropriate diagnosis of malnutrition. We primarily aimed to determine whether malnutrition diagnosed by validated nutrition assessment tools such as the Subjective Global Assessment (SGA) or Mini Nutritional Assessment (MNA) is independently associated with poorer clinical outcomes in the ICU and whether the use of nutrition screening tools demonstrates a similar association. PubMed, CINAHL, Scopus, and Cochrane Library were systematically searched for eligible studies. Search terms included were synonyms of malnutrition, nutritional status, screening, assessment, and intensive care unit. Eligible studies were case-control or cohort studies that recruited adults in the ICU; conducted the SGA, MNA, or used nutrition screening tools before or within 48 hours of ICU admission; and reported the prevalence of malnutrition and relevant clinical outcomes including mortality, length of stay (LOS), and incidence of infection (IOI). Twenty of 1168 studies were eligible. The prevalence of malnutrition ranged from 38% to 78%. Malnutrition diagnosed by nutrition assessments was independently associated with increased ICU LOS, ICU readmission, IOI, and the risk of hospital mortality. The SGA clearly had better predictive validity than the MNA. The association between malnutrition risk determined by nutrition screening was less consistent. Malnutrition is independently associated with poorer clinical outcomes in the ICU. Compared with nutrition assessment tools, the predictive validity of nutrition screening tools was less consistent.
Galehdari, Hamid; Saki, Najmaldin; Mohammadi-Asl, Javad; Rahim, Fakher
2013-01-01
Crigler-Najjar syndrome (CNS) type I and type II are usually inherited as autosomal recessive conditions that result from mutations in the UGT1A1 gene. The main objective of the present review is to summarize all available evidence on the accuracy of SNP-based pathogenicity detection tools, compared to published clinical results, for predicting the nsSNPs that lead to disease, using a prediction performance method. A comprehensive search was performed to find all mutations related to CNS. Database searches included dbSNP, SNPdbe, HGMD, Swissvar, Ensembl, and OMIM. All mutations related to CNS were extracted. The pathogenicity prediction was done using SNP-based pathogenicity detection tools, including SIFT, PHD-SNP, PolyPhen2, fathmm, Provean, and Mutpred. Overall, 59 different SNPs related to missense mutations in the UGT1A1 gene were reviewed. Comparing diagnostic odds ratios (ORs), PolyPhen2 and Mutpred had the highest value, 4.983 (95% CI: 1.24 - 20.02) in both, followed by SIFT (diagnostic OR: 3.25, 95% CI: 1.07 - 9.83). The highest Matthews correlation coefficient (MCC) among the tools belonged to SIFT (34.19%), followed by Provean, PolyPhen2, and Mutpred (29.99%, 29.89%, and 29.89%, respectively). Similarly, the highest accuracy (ACC) was achieved by SIFT (62.71%), followed by PolyPhen2 and Mutpred (61.02% in both). Our results suggest that some of the well-established SNP-based pathogenicity detection tools can appropriately reflect the role of a disease-associated SNP in both local and global structures.
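The comparison metrics named in the abstract (diagnostic OR, MCC, ACC) are standard confusion-matrix statistics. The sketch below computes them from hypothetical counts chosen only for illustration; they are not the review's actual data.

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = (TP/FN) / (FP/TN): odds of a positive prediction given disease
    # versus given no disease.
    return (tp * tn) / (fp * fn)

def matthews_corrcoef(tp, fp, fn, tn):
    # MCC ranges from -1 (total disagreement) to +1 (perfect prediction).
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

# Illustrative confusion-matrix counts (not taken from the review)
dor = diagnostic_odds_ratio(20, 10, 9, 20)   # (20*20)/(10*9) ~ 4.44
mcc = matthews_corrcoef(20, 10, 9, 20)       # 310/870 ~ 0.36
acc = accuracy(20, 10, 9, 20)                # 40/59 ~ 0.68
```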
Wikipedia: a key tool for global public health promotion.
Heilman, James M; Kemmann, Eckhard; Bonert, Michael; Chatterjee, Anwesh; Ragar, Brent; Beards, Graham M; Iberri, David J; Harvey, Matthew; Thomas, Brendan; Stomp, Wouter; Martone, Michael F; Lodge, Daniel J; Vondracek, Andrea; de Wolff, Jacob F; Liber, Casimir; Grover, Samir C; Vickers, Tim J; Meskó, Bertalan; Laurent, Michaël R
2011-01-31
The Internet has become an important health information resource for patients and the general public. Wikipedia, a collaboratively written Web-based encyclopedia, has become the dominant online reference work. It is usually among the top results of search engine queries, including when medical information is sought. Since April 2004, editors have formed a group called WikiProject Medicine to coordinate and discuss the English-language Wikipedia's medical content. This paper, written by members of the WikiProject Medicine, discusses the intricacies, strengths, and weaknesses of Wikipedia as a source of health information and compares it with other medical wikis. Medical professionals, their societies, patient groups, and institutions can help improve Wikipedia's health-related entries. Several examples of partnerships already show that there is enthusiasm to strengthen Wikipedia's biomedical content. Given its unique global reach, we believe its possibilities for use as a tool for worldwide health promotion are underestimated. We invite the medical community to join in editing Wikipedia, with the goal of providing people with free access to reliable, understandable, and up-to-date health information.
Wikipedia: A Key Tool for Global Public Health Promotion
Heilman, James M; Kemmann, Eckhard; Bonert, Michael; Chatterjee, Anwesh; Ragar, Brent; Beards, Graham M; Iberri, David J; Harvey, Matthew; Thomas, Brendan; Stomp, Wouter; Martone, Michael F; Lodge, Daniel J; Vondracek, Andrea; de Wolff, Jacob F; Liber, Casimir; Grover, Samir C; Vickers, Tim J; Meskó, Bertalan
2011-01-01
The Internet has become an important health information resource for patients and the general public. Wikipedia, a collaboratively written Web-based encyclopedia, has become the dominant online reference work. It is usually among the top results of search engine queries, including when medical information is sought. Since April 2004, editors have formed a group called WikiProject Medicine to coordinate and discuss the English-language Wikipedia’s medical content. This paper, written by members of the WikiProject Medicine, discusses the intricacies, strengths, and weaknesses of Wikipedia as a source of health information and compares it with other medical wikis. Medical professionals, their societies, patient groups, and institutions can help improve Wikipedia’s health-related entries. Several examples of partnerships already show that there is enthusiasm to strengthen Wikipedia’s biomedical content. Given its unique global reach, we believe its possibilities for use as a tool for worldwide health promotion are underestimated. We invite the medical community to join in editing Wikipedia, with the goal of providing people with free access to reliable, understandable, and up-to-date health information. PMID:21282098
B-HIT - A Tool for Harvesting and Indexing Biodiversity Data
Barker, Katharine; Braak, Kyle; Cawsey, E. Margaret; Coddington, Jonathan; Robertson, Tim; Whitacre, Jamie
2015-01-01
With the rapidly growing number of data publishers, the process of harvesting and indexing information to offer advanced search and discovery becomes a critical bottleneck in globally distributed primary biodiversity data infrastructures. The Global Biodiversity Information Facility (GBIF) implemented a Harvesting and Indexing Toolkit (HIT), which largely automates data harvesting activities for hundreds of collection and observational data providers. The team of the Botanic Garden and Botanical Museum Berlin-Dahlem has extended this well-established system with a range of additional functions, including improved processing of multiple taxon identifications, the ability to represent associations between specimen and observation units, new data quality control and new reporting capabilities. The open source software B-HIT can be freely installed and used for setting up thematic networks serving the demands of particular user groups. PMID:26544980
B-HIT - A Tool for Harvesting and Indexing Biodiversity Data.
Kelbert, Patricia; Droege, Gabriele; Barker, Katharine; Braak, Kyle; Cawsey, E Margaret; Coddington, Jonathan; Robertson, Tim; Whitacre, Jamie; Güntsch, Anton
2015-01-01
With the rapidly growing number of data publishers, the process of harvesting and indexing information to offer advanced search and discovery becomes a critical bottleneck in globally distributed primary biodiversity data infrastructures. The Global Biodiversity Information Facility (GBIF) implemented a Harvesting and Indexing Toolkit (HIT), which largely automates data harvesting activities for hundreds of collection and observational data providers. The team of the Botanic Garden and Botanical Museum Berlin-Dahlem has extended this well-established system with a range of additional functions, including improved processing of multiple taxon identifications, the ability to represent associations between specimen and observation units, new data quality control and new reporting capabilities. The open source software B-HIT can be freely installed and used for setting up thematic networks serving the demands of particular user groups.
Evidence-based Medicine Search: a customizable federated search engine.
Bracke, Paul J; Howse, David K; Keim, Samuel M
2008-04-01
This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.
Evidence-based Medicine Search: a customizable federated search engine
Bracke, Paul J.; Howse, David K.; Keim, Samuel M.
2008-01-01
Purpose: This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. Brief Description: The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Outcomes/Conclusion: Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center. PMID:18379665
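The "evidence pyramid" organization described above can be sketched as a simple ranking over result types. The level names below follow common evidence-based-medicine usage and are not necessarily EBM Search's exact labels.

```python
# Evidence pyramid levels, strongest first (common EBM convention)
PYRAMID = [
    "systematic reviews",
    "randomized controlled trials",
    "cohort studies",
    "case-control studies",
    "case reports",
]
RANK = {name: i for i, name in enumerate(PYRAMID)}

def order_results(results):
    """Sort (title, evidence_type) pairs from strongest to weakest evidence;
    unknown types sink to the bottom."""
    return sorted(results, key=lambda r: RANK.get(r[1], len(PYRAMID)))

hits = [
    ("Case report on X", "case reports"),
    ("Cochrane review of X", "systematic reviews"),
]
ordered = order_results(hits)
```

Presenting federated results in this order is what lets a clinician reach the strongest available evidence first without re-sorting manually.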
Electronic Biomedical Literature Search for Budding Researcher
Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.
2013-01-01
Searching for specific, well-defined literature related to a subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question; an appropriate research question is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of medical literature include Google, Google Scholar, Scirus, Yahoo Search, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about types of literature and levels of evidence, and details about a search engine's features (availability, user interface, ease of access, reputable content, and period of time covered) allow their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search, and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937
Electronic biomedical literature search for budding researcher.
Thakre, Subhash B; Thakre S, Sushama S; Thakre, Amol D
2013-09-01
Searching for specific, well-defined literature related to a subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question; an appropriate research question is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of medical literature include Google, Google Scholar, Scirus, Yahoo Search, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about types of literature and levels of evidence, and details about a search engine's features (availability, user interface, ease of access, reputable content, and period of time covered) allow their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search, and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research.
Meta-heuristic algorithms as tools for hydrological science
NASA Astrophysics Data System (ADS)
Yoo, Do Guen; Kim, Joong Hoon
2014-12-01
In this paper, meta-heuristic optimization techniques are introduced and their applications to water resources engineering, particularly in hydrological science, are reviewed. In recent years, meta-heuristic optimization techniques have been developed that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions with limited computation time and memory use, without requiring complex derivatives. Simulation-based meta-heuristic methods such as genetic algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome several drawbacks of traditional mathematical methods. For example, the HS algorithm is conceptualized from the process of a musical performance in search of better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function, providing a more robust determination of musical performance than can be achieved through typical aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented, and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Overall, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
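As a hedged illustration of the musical-performance analogy, the sketch below implements a minimal Harmony Search on a toy objective; the sphere function stands in for a real hydrological calibration problem, and the parameter values are conventional defaults rather than anything from the paper.

```python
import random

def harmony_search(obj, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=3000, seed=7):
    """Minimal Harmony Search: improvise new harmonies from memory,
    replacing the worst stored harmony whenever the new one is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:             # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                                  # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: obj(memory[i]))
        if obj(new) < obj(memory[worst]):
            memory[worst] = new
    return min(memory, key=obj)

# Sphere function as a stand-in for a hydrological calibration objective
best = harmony_search(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 3)
```

The harmony memory plays the role a population plays in a GA, while pitch adjustment provides the local refinement step.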
A Fast, Minimalist Search Tool for Remote Sensing Data
NASA Astrophysics Data System (ADS)
Lynnes, C. S.; Macharrie, P. G.; Elkins, M.; Joshi, T.; Fenichel, L. H.
2005-12-01
We present a tool that emphasizes speed and simplicity in searching remotely sensed Earth Science data. The tool, nicknamed "Mirador" (Spanish for a scenic overlook), provides only four free-text search form fields: Keywords, Location, Data Start and Data Stop. This contrasts with many current Earth Science search tools that offer highly structured interfaces in order to ensure precise, non-zero results. The disadvantages of the structured approach lie in its complexity and resultant learning curve, as well as the time it takes to formulate and execute the search, thus discouraging iterative discovery. On the other hand, the success of the basic Google search interface shows that many users are willing to forgo high search precision if the search process is fast enough to enable rapid iteration. Therefore, we employ several methods to increase the speed of search formulation and execution. Search formulation is expedited by the minimalist search form, with only one required field. Also, a gazetteer enables the use of geographic terms as shorthand for latitude/longitude coordinates. Search execution is accelerated by initially presenting dataset results (returned from a Google Mini appliance) with an estimated number of "hits" for each dataset based on the user's space-time constraints. The more costly file-level search is executed against a PostgreSQL database only when the user "drills down", and then covers only the fraction of the time period needed to return the next page of results. The simplicity of the search form makes the tool easy to learn and use, and the speed of the searches enables an iterative form of data discovery.
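The gazetteer shorthand (place name expanded into coordinates) can be sketched as a simple lookup. The entries and bounding boxes below are rough illustrations, not Mirador's actual gazetteer.

```python
# Hypothetical gazetteer entries: place name -> (west, south, east, north),
# coordinates are rough illustrations only.
GAZETTEER = {
    "amazon basin": (-80.0, -20.0, -45.0, 5.0),
    "lake victoria": (31.5, -3.1, 34.9, 0.5),
}

def resolve_location(freetext_term):
    """Expand a free-text location term into a bounding box, or None if the
    term is not a known place name."""
    return GAZETTEER.get(freetext_term.strip().lower())

box = resolve_location("Lake Victoria")
```

A lookup like this keeps the Location field free-text while still letting the back end run a precise spatial query.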
Shum, Jessica; Poureslami, Iraj; Doyle-Waters, Mary M; FitzGerald, J Mark
2016-06-07
The term "health literacy" (HL) was first coined in 1974, and it is currently most commonly defined as a person's ability to access, understand, evaluate, communicate, and use health information to make decisions for one's health. Previous systematic reviews assessing the effect of existing HL measurement tools on health outcomes have searched only for the term "health literacy" to identify measures, instead of incorporating one or more of the five domains in their search. Furthermore, as the domain "use" is fairly new, few studies have actually assessed this domain. In this protocol, we propose to identify and assess HL measures that applied the mentioned five domains, either collectively or individually, in assessing chronic disease management, in particular for asthma and chronic obstructive pulmonary disease (COPD). The ultimate goal is to provide recommendations towards the development and validation of a patient-centric HL measurement tool for the two diseases. A comprehensive, electronic search will be conducted to identify potential studies dating from 1974 to 2016 from databases such as Embase, MEDLINE, CINAHL, Cochrane Central Register of Controlled Trials, Web of Science, ERIC, PsycINFO, and HAPI. Database searches will be complemented with grey literature. Two independent reviewers will perform tool selection, study selection, data extraction, and quality assessment using pre-designed study forms. Any disagreement will be resolved through discussion or a third reviewer. Only studies that have developed or validated HL measurement tools (including one or more of the five domains mentioned above) among asthma and COPD patients will be included.
Information collected from the studies will include instrument details such as versions, purpose, underlying constructs, administration, mapping of items onto the five domains, internal structure, scoring, response processes, standard error of measurement (SEM), correlation with other variables, clinically important difference, and item response theory (IRT)-based analyses. The identified strengths and weaknesses as well as reliability, validity, responsiveness, and interpretability of the tools from the validation studies will also be assessed using the COSMIN checklist. A synthesis will be presented for all tools in relation to asthma and COPD management. This systematic review will be one of several key contributions central to a global evidence-based strategy funded by the Canadian Institutes of Health Research (CIHR) for measuring HL in patients with asthma and COPD, highlighting the gaps and inconsistencies of domains between existing tools. The knowledge generated from this review will provide the team information on (1) the five-domain model and cross domains, (2) underlying constructs, (3) tool length, (4) time for completion, (5) reading level, and (6) format for development of the proposed tool. Other aspects of the published validation studies such as reliability coefficients, SEM, correlations with other variables, clinically important difference, and IRT-based analyses will be important for comparison purposes when testing, interpreting, and validating the developed tool. PROSPERO CRD42016037532.
Kraak, Vivica I; Harrigan, Paige B; Lawrence, Mark; Harrison, Paul J; Jackson, Michaela A; Swinburn, Boyd
2012-03-01
Transnational food, beverage and restaurant companies, and their corporate foundations, may be potential collaborators to help address complex public health nutrition challenges. While UN system guidelines are available for private-sector engagement, non-governmental organizations (NGOs) have limited guidelines to navigate the diverse opportunities and challenges presented by partnering with these companies through public-private partnerships (PPPs) to address the global double burden of malnutrition. We conducted a search of electronic databases, UN system websites and grey literature to identify resources about partnerships used to address the global double burden of malnutrition. A narrative summary provides a synthesis of the interdisciplinary literature identified. We describe partnership opportunities, benefits and challenges, as well as tools and approaches to help NGOs engage with the private sector to address global public health nutrition challenges. PPP benefits include: raising the visibility of nutrition and health on policy agendas; mobilizing funds and advocating for research; strengthening food-system processes and delivery systems; facilitating technology transfer; and expanding access to medications, vaccines, healthy food and beverage products, and nutrition assistance during humanitarian crises. PPP challenges include: balancing private commercial interests with public health interests; managing conflicts of interest; ensuring that co-branded activities support healthy products and healthy eating environments; complying with ethical codes of conduct; assessing partnership compatibility; and evaluating partnership outcomes. NGOs should adopt a systematic and transparent approach, using available tools and processes to maximize benefits and minimize risks of partnering with transnational food, beverage and restaurant companies to effectively target the global double burden of malnutrition.
Ringed Seal Search for Global Optimization via a Sensitive Search Model
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals against external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in terms of the balance between exploration (extensive) and exploitation (intensive) of the search space.
RSS can efficiently mimic seal pup behavior in finding the best lair and provides a new algorithm to be used in global optimization problems. PMID:26790131
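A heavily simplified sketch of the two-state idea follows: small Brownian steps in the normal state, heavy-tailed jumps in the urgent state. The step distributions (a Cauchy draw standing in for the Levy walk), the noise model, and the greedy acceptance rule are assumptions for illustration, not the authors' exact formulation.

```python
import random

def ringed_seal_search(obj, bounds, n_seals=15, iters=2000,
                       noise_rate=0.25, seed=3):
    """Toy two-state search: Brownian refinement in the normal state,
    heavy-tailed escape jumps when simulated predator noise triggers
    the urgent state."""
    rng = random.Random(seed)

    def clip(v):
        return [min(max(x, lo), hi) for x, (lo, hi) in zip(v, bounds)]

    lairs = [clip([rng.uniform(lo, hi) for lo, hi in bounds])
             for _ in range(n_seals)]
    best = min(lairs, key=obj)
    for _ in range(iters):
        for i, lair in enumerate(lairs):
            if rng.random() < noise_rate:
                # Urgent state: Cauchy-distributed jump (ratio of Gaussians)
                # as a heavy-tailed, Levy-like stand-in for extensive search.
                step = [rng.gauss(0, 1) / max(abs(rng.gauss(0, 1)), 1e-9)
                        for _ in bounds]
            else:
                # Normal state: small Brownian step (intensive search).
                step = [rng.gauss(0, 0.1) for _ in bounds]
            cand = clip([x + s for x, s in zip(lair, step)])
            if obj(cand) < obj(lair):   # greedy move to the better lair
                lairs[i] = cand
        best = min(best, min(lairs, key=obj), key=obj)
    return best

# Sphere benchmark in 2-D
best = ringed_seal_search(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 2)
```

The random switch between step distributions is what supplies the exploration/exploitation balance the abstract emphasizes.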
Accessing eSDO Solar Image Processing and Visualization through AstroGrid
NASA Astrophysics Data System (ADS)
Auden, E.; Dalla, S.
2008-08-01
The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.
Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles
ERIC Educational Resources Information Center
Carlson, Scott
2006-01-01
Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…
Web Usage Mining Analysis of Federated Search Tools for Egyptian Scholars
ERIC Educational Resources Information Center
Mohamed, Khaled A.; Hassan, Ahmed
2008-01-01
Purpose: This paper aims to examine the behaviour of the Egyptian scholars while accessing electronic resources through two federated search tools. The main purpose of this article is to provide guidance for federated search tool technicians and support teams about user issues, including the need for training. Design/methodology/approach: Log…
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high-quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving complex multi-objective reservoir operation problems.
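The screening step described above can be sketched in a few lines. The toy function, sample size, binning estimator, and 0.05 cut-off below are all illustrative assumptions, not the authors' setup; a real Sobol analysis uses dedicated sampling designs (e.g., Saltelli sampling) rather than this crude binned estimate:

```python
import random
random.seed(0)

def f(x):
    # Toy objective: output dominated by x[0]; x[2] is nearly inert.
    return 5.0 * x[0] + 2.0 * x[1] + 0.01 * x[2]

N, D = 20000, 3
X = [[random.random() for _ in range(D)] for _ in range(N)]
Y = [f(x) for x in X]
mean_y = sum(Y) / N
var_y = sum((y - mean_y) ** 2 for y in Y) / N

def first_order_index(i, bins=20):
    # Crude estimate of S_i = Var(E[Y | x_i]) / Var(Y) by binning x_i.
    groups = [[] for _ in range(bins)]
    for x, y in zip(X, Y):
        groups[min(int(x[i] * bins), bins - 1)].append(y)
    cond_means = [sum(g) / len(g) for g in groups if g]
    m = sum(cond_means) / len(cond_means)
    return sum((c - m) ** 2 for c in cond_means) / len(cond_means) / var_y

S = [first_order_index(i) for i in range(D)]
# Screening: keep only variables whose first-order index clears a threshold,
# so the downstream optimization works in a reduced decision space.
keep = [i for i in range(D) if S[i] > 0.05]
print(S, keep)
```

With the coefficients above, the screen retains the two influential variables and drops the inert one, which is the dimension-reduction effect the abstract describes.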
Supervised learning of tools for content-based search of image databases
NASA Astrophysics Data System (ADS)
Delanoy, Richard L.
1996-03-01
A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
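The matched-filter idea behind TIM's functional templates can be illustrated with a minimal sketch. The image, template, and normalized-correlation score below are invented for illustration and are far simpler than TIM's knowledge-based templates:

```python
def correlate_score(patch, template):
    # Normalized dot product between a flattened image patch and a template:
    # 1.0 means the patch is a scaled copy of the template.
    num = sum(p * t for p, t in zip(patch, template))
    na = sum(p * p for p in patch) ** 0.5
    nb = sum(t * t for t in template) ** 0.5
    return num / (na * nb) if na and nb else 0.0

def best_match(image, template, tw):
    """Slide a 1-row template of width tw over each row; return (row, col, score)."""
    best = (-1, -1, -1.0)
    for r, row in enumerate(image):
        for c in range(len(row) - tw + 1):
            s = correlate_score(row[c:c + tw], template)
            if s > best[2]:
                best = (r, c, s)
    return best

image = [
    [0, 0, 1, 3, 1, 0],
    [0, 1, 5, 9, 5, 1],
    [0, 0, 1, 3, 1, 0],
]
template = [1, 5, 9, 5, 1]  # bright-blob intensity profile to search for
print(best_match(image, template, len(template)))  # best alignment: row 1, col 1
```

In TIM, the analogue of `template` is learned and refined from the user's corrections rather than supplied up front.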
Achieving Sub-Second Search in the CMR
NASA Astrophysics Data System (ADS)
Gilman, J.; Baynes, K.; Pilone, D.; Mitchell, A. E.; Murphy, K. J.
2014-12-01
The Common Metadata Repository (CMR) is the next-generation Earth Science metadata catalog for NASA's Earth Observing data. It joins together the holdings from the EOS Clearing House (ECHO) and the Global Change Master Directory (GCMD), creating a unified, authoritative source for EOSDIS metadata. The CMR allows ingest in many different formats while providing consistent search behavior and retrieval in any supported format. Performance is a critical component of the CMR, ensuring improved data discovery and client interactivity. The CMR delivers sub-second search performance for any of the common query conditions (including spatial) across hundreds of millions of metadata granules. It also allows the addition of new metadata concepts such as visualizations, parameter metadata, and documentation. The CMR's goals presented many challenges. This talk will describe the CMR architecture, design, and innovations that were made to achieve its goals. This includes:
* Architectural features like immutability and backpressure.
* Data management techniques, such as caching and parallel loading, that give big performance gains.
* Open Source and COTS tools like the Elasticsearch search engine.
* Adoption of Clojure, a functional programming language for the Java Virtual Machine.
* Development of a custom spatial search plugin for Elasticsearch and why it was necessary.
* Introduction of a unified model for metadata that maps every supported metadata format to a consistent domain model.
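The data structure that lets engines such as Elasticsearch answer term queries in sub-second time is the inverted index, which maps each term to the set of records containing it. A minimal in-memory sketch (the granule records below are hypothetical, not CMR data):

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal in-memory inverted index: term -> set of record ids."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.records = {}

    def add(self, rec_id, text):
        self.records[rec_id] = text
        for term in text.lower().split():
            self.postings[term].add(rec_id)

    def search(self, query):
        # AND semantics: intersect posting sets, smallest first for speed.
        sets = sorted((self.postings.get(t.lower(), set())
                       for t in query.split()), key=len)
        if not sets:
            return set()
        result = set(sets[0])
        for s in sets[1:]:
            result &= s
        return result

idx = InvertedIndex()
idx.add(1, "MODIS land surface temperature granule")
idx.add(2, "Landsat surface reflectance granule")
idx.add(3, "GPM precipitation granule")
print(idx.search("surface granule"))  # -> {1, 2}
```

Query cost scales with posting-list sizes rather than the corpus size, which is why hundreds of millions of granules can still be searched quickly.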
The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool
ERIC Educational Resources Information Center
Liaw, Shu-Sheng
2004-01-01
Search engines have been developed for helping learners to seek online information. Based on theory of planned behaviour approach, this research intends to investigate the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction of search engine, search engines as an information…
Nicholson, Scott
2005-01-01
The paper explores the current state of generalist search education in library schools and considers that foundation in respect to the Medical Library Association's statement on expert searching. Syllabi from courses with significant searching components were examined from ten of the top library schools, as determined by the U.S. News & World Report rankings. Mixed methods were used, but primarily quantitative bibliometric methods were used. The educational focus in these searching components was on understanding the generalist searching resources and typical users and on performing a reflective search through application of search strategies, controlled vocabulary, and logic appropriate to the search tool. There is a growing emphasis on Web-based search tools and a movement away from traditional set-based searching and toward free-text search strategies. While a core set of authors is used in these courses, no core set of readings is used. While library schools provide a strong foundation, future medical librarians still need to take courses that introduce them to the resources, settings, and users associated with medical libraries. In addition, as more emphasis is placed on Web-based search tools and free-text searching, instructors of the specialist medical informatics courses will need to focus on teaching traditional search methods appropriate for common tools in the medical domain.
Collinearity Impairs Local Element Visual Search
ERIC Educational Resources Information Center
Jingling, Li; Tseng, Chia-Huei
2013-01-01
In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…
Health literacy and usability of clinical trial search engines.
Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K
2014-01-01
Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.
Multi-fidelity and multi-disciplinary design optimization of supersonic business jets
NASA Astrophysics Data System (ADS)
Choi, Seongim
Supersonic jets have been drawing great attention since the end of service for the Concorde was announced in April 2003. It is believed, however, that civilian supersonic aircraft may make a viable return in the business jet market. This thesis focuses on the design optimization of feasible supersonic business jet configurations. Preliminary design techniques for mitigation of ground sonic boom are investigated while ensuring that all relevant disciplinary constraints are satisfied (including aerodynamic performance, propulsion, stability & control, and structures). In order to achieve reasonable confidence in the resulting designs, high-fidelity simulations are required, making the entire design process both expensive and complex. In order to minimize the computational cost, surrogate/approximate models are constructed using a hierarchy of different-fidelity analysis tools including PASS, A502/Panair and Euler/NS codes. Direct search methods such as Genetic Algorithms (GAs) and a nonlinear SIMPLEX are employed in searches of large and noisy design spaces. A local gradient-based search method can be combined with these global search methods for small modifications of candidate optimum designs. The Mesh Adaptive Direct Search (MADS) method can also be used to explore the design space using a solution-adaptive grid refinement approach. These hybrid approaches, both in search methodology and surrogate model construction, are shown to result in designs with reductions in sonic boom and improved aerodynamic performance.
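The hybrid strategy of a cheap global search followed by expensive local refinement can be sketched on a toy one-dimensional problem. The objective, sample budget, and step-halving local search below are illustrative stand-ins for the GA/SIMPLEX/MADS machinery the thesis uses:

```python
import math
import random
random.seed(1)

def objective(x):
    # Toy multimodal stand-in for an expensive design objective.
    return (x - 3.0) ** 2 + 2.0 * math.sin(3.0 * x) ** 2

def local_refine(x, step=0.5, tol=1e-6):
    # Derivative-free descent (a crude stand-in for SIMPLEX/MADS):
    # try moves left and right, halving the step when neither improves.
    fx = objective(x)
    while step > tol:
        for cand in (x - step, x + step):
            fc = objective(cand)
            if fc < fx:
                x, fx = cand, fc
                break
        else:
            step *= 0.5
    return x, fx

# Stage 1: cheap global sampling to locate a promising basin.
samples = [random.uniform(-10.0, 10.0) for _ in range(200)]
best_start = min(samples, key=objective)
# Stage 2: expensive local refinement started from the best global candidate.
x_opt, f_opt = local_refine(best_start)
print(x_opt, f_opt)
```

The refinement stage can only improve on the global stage's best candidate, which mirrors the thesis's use of local search for "small modifications of candidate optimum designs."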
The effects of applying information technology on job empowerment dimensions.
Ajami, Sima; Arab-Chadegani, Raziyeh
2014-01-01
Information Technology (IT) is known as a valuable tool for information dissemination. Today, information communication technology can be used as a powerful tool to improve employees' quality and efficiency. The increasing development of technology-based tools and their adaptation speed to human requirements has led to a new form of learning environment and creative, active and inclusive interaction. These days, information is one of the most important power resources in every organization; accordingly, acquiring information, especially central or strategic information, can help organizations to build a power base and influence others. The aim of this study was to identify the most important criteria in job empowerment using IT and also the advantages of assessing empowerment. This study was a narrative review. The literature was searched in databases and journals (Springer, ProQuest, PubMed, ScienceDirect and the Scientific Information Database) with keywords including IT, empowerment and employees in the search areas of titles, keywords, abstracts and full texts. The preliminary search, conducted in July 2013, resulted in 85 articles, books and conference proceedings published between 1983 and 2013. After a careful analysis of the content of each paper, a total of 40 papers and books were selected based on their relevancy. According to the Ardalan model, IT plays a significant role in fast data collection, global and fast access to a broad range of health information, quick evaluation of information, better communication among health experts and more awareness through access to various information sources. IT leads to better performance accompanied by higher efficiency in service provision, all of which will cause more satisfaction from fast and high-quality services.
PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome
Sarika; Arora, Vasu; Iquebal, M. A.; Rai, Anil; Kumar, Dinesh
2013-01-01
Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield, resistance to disease and others that will benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, based on chromosome-wise as well as location-wise search of primers. A total of 123,387 Short Tandem Repeats (STRs) were extracted from the publicly available pigeonpea genome using the MIcroSAtellite tool (MISA). The database is an online relational database based on ‘three-tier architecture’ that catalogues information on microsatellites in MySQL, and a user-friendly interface is developed using PHP. Search for STRs may be customized by limiting their location on the chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer designing of selected markers with left and right flankings of size up to 500 bp. This will enable researchers to select markers of choice at a desired interval over the chromosome. Furthermore, one can use individual STRs of a targeted region over the chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although it is an in silico approach, markers' search based on characteristics and location of STRs is expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/ PMID:23396298
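The kind of perfect-tandem-repeat extraction that tools like MISA perform can be approximated with a short regular-expression scan. This toy finder is invented here (it is not PIPEMicroDB or MISA code) and ignores compound and imperfect repeats:

```python
import re

def find_strs(seq, min_unit=2, max_unit=6, min_repeats=3):
    """Toy microsatellite finder: report (start, motif, copy_count) for
    perfect tandem repeats of unit length min_unit..max_unit."""
    hits = []
    for unit_len in range(min_unit, max_unit + 1):
        # (motif)\1{n,} matches a motif repeated at least n+1 times in a row.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // unit_len))
    return hits

seq = "TTACGACGACGACGTTGAGATCATCATCGG"
hits = find_strs(seq)
print(hits)  # -> [(2, 'ACG', 4), (19, 'ATC', 3)]
```

A production pipeline would also restrict motif redundancy (e.g., treating ACG/CGA/GAC as one motif class) and handle the compound repeats that MISA reports.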
Chang, Suhua; Zhang, Jiajie; Liao, Xiaoyun; Zhu, Xinxing; Wang, Dahai; Zhu, Jiang; Feng, Tao; Zhu, Baoli; Gao, George F; Wang, Jian; Yang, Huanming; Yu, Jun; Wang, Jing
2007-01-01
Frequent outbreaks of highly pathogenic avian influenza and the increasing data available for comparative analysis require a central database specialized in influenza viruses (IVs). We have established the Influenza Virus Database (IVDB) to integrate information and create an analysis platform for genetic, genomic, and phylogenetic studies of the virus. IVDB hosts complete genome sequences of influenza A virus generated by Beijing Institute of Genomics (BIG) and curates all other published IV sequences after expert annotation. Our Q-Filter system classifies and ranks all nucleotide sequences into seven categories according to sequence content and integrity. IVDB provides a series of tools and viewers for comparative analysis of the viral genomes, genes, genetic polymorphisms and phylogenetic relationships. A search system has been developed for users to retrieve a combination of different data types by setting search options. To facilitate analysis of global viral transmission and evolution, the IV Sequence Distribution Tool (IVDT) has been developed to display the worldwide geographic distribution of chosen viral genotypes and to couple genomic data with epidemiological data. The BLAST, multiple sequence alignment and phylogenetic analysis tools were integrated for online data analysis. Furthermore, IVDB offers instant access to pre-computed alignments and polymorphisms of IV genes and proteins, and presents the results as SNP distribution plots and minor allele distributions. IVDB is publicly available at http://influenza.genomics.org.cn.
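The idea of ranking sequences by content and integrity, as IVDB's Q-Filter does with its seven categories, can be sketched with a toy classifier. The category names and thresholds below are invented for illustration and do not reproduce the actual Q-Filter rules:

```python
def q_filter(seq):
    """Toy quality classifier: rank a nucleotide sequence by content
    and integrity (illustrative categories, not IVDB's seven)."""
    seq = seq.upper()
    if not seq:
        return "empty"
    if any(ch not in "ACGTN" for ch in seq):
        return "invalid"           # non-nucleotide characters present
    n_frac = seq.count("N") / len(seq)
    if n_frac == 0.0:
        return "complete"          # no ambiguous bases
    if n_frac < 0.05:
        return "near-complete"     # few ambiguous bases
    return "fragmentary"           # heavily ambiguous sequence

print(q_filter("ACGTACGT"),
      q_filter("ACGTNACGTACGTACGTACGTN"),
      q_filter("ACGX"))
```

Each incoming sequence gets exactly one category, so the curated collection can be filtered or ranked by integrity before comparative analysis.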
Nove, Andrea; Cometto, Giorgio; Campbell, James
2017-11-09
In their adoption of WHA resolution 69.19, World Health Organization Member States requested all bilateral and multilateral initiatives to conduct impact assessments of their funding to human resources for health. The High-Level Commission for Health Employment and Economic Growth similarly proposed that official development assistance for health, education, employment and gender is best aligned with creating decent jobs in the health and social workforce. No standard tools exist for assessing the impact of global health initiatives on the health workforce, but tools exist in other fields. The objectives of this paper are to describe how a review of grey literature informed the development of a draft health workforce impact assessment tool and to introduce the tool. A search of grey literature yielded 72 examples of impact assessment tools and guidance from a wide variety of fields including gender, health and human rights. These examples were reviewed, and information relevant to the development of a health workforce impact assessment was extracted from them using an inductive process. A number of good practice principles were identified from the review. These informed the development of a draft health workforce impact assessment tool, based on an established health labour market framework. The tool is designed to be applied before implementation. It consists of a relatively short and focused screening module to be applied to all relevant initiatives, followed by a more in-depth assessment to be applied only to initiatives for which the screening module indicates that significant implications for HRH are anticipated. It thus aims to strike a balance between maximising rigour and minimising administrative burden. The application of the new tool will help to ensure that health workforce implications are incorporated into global health decision-making processes from the outset and to enhance positive HRH impacts and avoid, minimise or offset negative impacts.
NASA Technical Reports Server (NTRS)
Bickham, Grandin; Saile, Lynn; Havelka, Jacque; Fitts, Mary
2011-01-01
Introduction: Johnson Space Center (JSC) offers two extensive libraries that contain journals, research literature and electronic resources. Searching capabilities are available to those individuals residing onsite or through a librarian's search. Many individuals have rich collections of references, but no mechanisms exist to share reference libraries across researchers, projects, or directorates. Likewise, information regarding which references are provided to which individuals is not available, resulting in duplicate requests, redundant labor costs and associated copying fees. In addition, this tends to limit collaboration between colleagues and promotes the establishment of individual, unshared silos of information. The Integrated Medical Model (IMM) team has utilized a centralized reference management tool during the development, test, and operational phases of this project. The Enterprise Reference Library project expands the capabilities developed for IMM to address the above issues and enhance collaboration across JSC. Method: After significant market analysis for a multi-user reference management tool, no available commercial tool was found to meet this need, so a software program was built around a commercial tool, Reference Manager 12 by The Thomson Corporation. A use case approach guided the requirements development phase. The premise of the design is that individuals use their own reference management software and export to SharePoint when their library is incorporated into the Enterprise Reference Library. This results in a searchable user-specific library application. An accompanying share folder will warehouse the electronic full-text articles, which allows the global user community to access full-text articles. Discussion: An enterprise reference library solution can provide a multidisciplinary collection of full-text articles. This approach improves efficiency in obtaining and storing reference material while greatly reducing labor, purchasing and duplication costs. Most importantly, increasing collaboration across research groups provides unprecedented access to information relevant to NASA's mission. Conclusion: This project is an expansion and cost-effective leveraging of the existing JSC centralized library. Adding keyword and author search capabilities and an alert function for notifications about new articles, based on users' profiles, represent examples of future enhancements.
New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM
NASA Astrophysics Data System (ADS)
Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.
2017-12-01
Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
A CT-based software tool for evaluating compensator quality in passively scattered proton therapy
NASA Astrophysics Data System (ADS)
Li, Heng; Zhang, Lifei; Dong, Lei; Sahoo, Narayan; Gillin, Michael T.; Zhu, X. Ronald
2010-11-01
We have developed a quantitative computed tomography (CT)-based quality assurance (QA) tool for evaluating the accuracy of manufactured compensators used in passively scattered proton therapy. The thickness of a manufactured compensator was measured from its CT images and compared with the planned thickness defined by the treatment planning system. The difference between the measured and planned thicknesses was calculated with use of the Euclidean distance transformation and the kd-tree search method. Compensator accuracy was evaluated by examining several parameters including mean distance, maximum distance, global thickness error and central axis shifts. Two rectangular phantoms were used to validate the performance of the QA tool. Nine patients and 20 compensators were included in this study. We found that mean distances, global thickness errors and central axis shifts were all within 1 mm for all compensators studied, with maximum distances ranging from 1.1 to 3.8 mm. Although all compensators passed manual verification at selected points, about 5% of the pixels still had maximum distances of >2 mm, most of which correlated with large depth gradients. The correlation between the mean depth gradient of the compensator and the percentage of pixels with mean distance <1 mm is -0.93 with p < 0.001, which suggests that the mean depth gradient is a good indicator of compensator complexity. These results demonstrate that the CT-based compensator QA tool can be used to quantitatively evaluate manufactured compensators.
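The thickness comparison at the heart of the QA tool can be sketched with plain per-pixel differences. The maps below are hypothetical, and the real tool uses a Euclidean distance transformation with a kd-tree search (which also accounts for lateral offsets) rather than direct pixel subtraction:

```python
# Hypothetical 2D compensator thickness maps in mm (planned vs. CT-measured).
planned = [
    [10.0, 10.0, 12.0, 14.0],
    [10.0, 11.0, 13.0, 15.0],
    [10.0, 11.0, 13.0, 16.0],
]
measured = [
    [10.2,  9.8, 12.1, 14.6],
    [10.1, 11.3, 12.8, 17.2],
    [ 9.9, 11.0, 13.4, 16.5],
]

# Per-pixel absolute thickness error, plus summary statistics.
diffs = [abs(m - p) for mr, pr in zip(measured, planned)
                    for m, p in zip(mr, pr)]
mean_err = sum(diffs) / len(diffs)
max_err = max(diffs)
# Flag pixels whose error exceeds a 2 mm tolerance (an illustrative cut-off
# echoing the >2 mm pixels discussed in the abstract).
flagged = [(r, c)
           for r, (mr, pr) in enumerate(zip(measured, planned))
           for c, (m, p) in enumerate(zip(mr, pr))
           if abs(m - p) > 2.0]
print(mean_err, max_err, flagged)
```

In this toy map only one pixel exceeds tolerance; per the abstract, such outliers tend to cluster where the compensator's depth gradient is large.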
Information Discovery and Retrieval Tools
2004-12-01
information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Information Discovery and Retrieval Tools
2003-04-01
information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Government Search Tools: Evaluating Fee and Free Search Alternatives.
ERIC Educational Resources Information Center
Gordon-Murnane, Laura
1999-01-01
Examines four tools that provide access to federal government information: FedWorld, Usgovsearch.com, Google/Unclesam, and GovBot. Compares search features, size of collection, ease of use, and cost or subscription requirements. (LRW)
Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-01-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…
Global Statistical Learning in a Visual Search Task
ERIC Educational Resources Information Center
Jones, John L.; Kaschak, Michael P.
2012-01-01
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…
Clustering methods for the optimization of atomic cluster structure
NASA Astrophysics Data System (ADS)
Bagattini, Francesco; Schoen, Fabio; Tigli, Luca
2018-04-01
In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge number of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
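The core operation the abstract parallelizes is repeatedly solving Kepler's equation, E - e*sin(E) = M, for the eccentric anomaly E. A minimal scalar sketch via Newton's method (illustrative only; the paper's GPU/CUDA kernels, precision strategy, and starting-guess details are not reproduced here):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric
    anomaly E via Newton's method (scalar CPU sketch)."""
    # A common starting guess; adequate for moderate eccentricities.
    E = M if e < 0.8 else math.pi
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M        # residual
        fp = 1.0 - e * math.cos(E)          # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# The residual E - e*sin(E) - M should vanish at the solution.
E = solve_kepler(1.0, 0.3)
```

In the GPU setting, each thread would run this same iteration for one (system, observation) pair, which is why the operation is so amenable to massive parallelism.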
Parallel Computational Protein Design.
Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang
2017-01-01
Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy solution (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedups in large protein design cases with a small memory overhead comparing to the traditional A* search algorithm implementation, while still guaranteeing the optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle the problems in which the conformation space is too large and the global optimal solution cannot be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with the state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
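The DEE/A* framework described above guarantees the global minimum because A* with an admissible heuristic (one that never overestimates the remaining cost) cannot return a suboptimal solution. A generic, toy-scale sketch of that property (plain graph A*, not the OSPREY/gOSPREY conformation-tree implementation; the graph and costs are invented):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Best-first A* search returning the cheapest path cost.
    `neighbors(n)` yields (next_node, edge_cost); `h` must be
    admissible, which guarantees global optimality at goal pop."""
    open_set = [(h(start), 0.0, start)]   # (f = g + h, g, node)
    best_g = {start: 0.0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set, (g2 + h(nxt), g2, nxt))
    return None

# Toy graph standing in for rotamer choices at successive positions.
graph = {"s": [("a", 1.0), ("b", 4.0)],
         "a": [("t", 5.0)],
         "b": [("t", 1.0)],
         "t": []}
cost = a_star("s", "t", lambda n: graph[n], lambda n: 0.0)
```

The exponential worst case the abstract mentions shows up as the size of `open_set`; the GPU parallelism in gOSPREY targets the heuristic evaluation, not a change to this control flow.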
NASA Astrophysics Data System (ADS)
Kim, Woojin; Boonn, William
2010-03-01
Data mining of existing radiology and pathology reports within an enterprise health system can be used for clinical decision support, research, education, as well as operational analyses. In our health system, the database of radiology and pathology reports exceeds 13 million entries combined. We are building a web-based tool to allow search and data analysis of these combined databases using freely available and open source tools. This presentation will compare performance of an open source full-text indexing tool to MySQL's full-text indexing and searching and describe implementation procedures to incorporate these capabilities into a radiology-pathology search engine.
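The common idea behind both MySQL's FULLTEXT indexing and standalone open-source engines is the inverted index: a map from each term to the set of documents containing it, with queries answered by set intersection. A toy sketch with made-up report snippets (real engines add tokenization rules, stemming, stopwords, and relevance ranking):

```python
from collections import defaultdict

def build_index(docs):
    """Minimal inverted index: term -> set of document ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND query: ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    hits = set(index.get(terms[0], set()))
    for t in terms[1:]:
        hits &= index.get(t, set())
    return hits

# Hypothetical report fragments, for illustration only.
docs = {1: "chest radiograph shows nodule",
        2: "pathology report nodule benign",
        3: "normal chest radiograph"}
idx = build_index(docs)
hits = search(idx, "chest nodule")
```

Comparing engines, as the abstract describes, then comes down to how efficiently this index is built and intersected at the scale of 13 million reports.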
Hu, Qiyue; Peng, Zhengwei; Kostrowicki, Jaroslav; Kuki, Atsuo
2011-01-01
Pfizer Global Virtual Library (PGVL) of 10^13 readily synthesizable molecules offers a tremendous opportunity for lead optimization and scaffold hopping in drug discovery projects. However, mining a chemical space of this size presents a challenge for the concomitant design informatics, because standard molecular similarity searches against a collection of explicit molecules cannot be utilized: no chemical information system could create and manage more than 10^8 explicit molecules. Nevertheless, by accepting a tolerable level of false negatives in search results, we were able to bypass the need for full 10^13 enumeration and enable efficient similarity search and retrieval in this huge chemical space for practical usage by medicinal chemists. In this report, two search methods (LEAP1 and LEAP2) are presented. The first method uses PGVL reaction knowledge to disassemble the incoming search query molecule into a set of reactants and then uses reactant-level similarities to actual available starting materials to focus on a much smaller sub-region of the full virtual library compound space. This sub-region is then explicitly enumerated and searched via a standard similarity method using the original query molecule. The second method uses a fuzzy mapping onto candidate reactions and does not require exact disassembly of the incoming query molecule. Instead, Basis Products (or capped reactants) are mapped into the query molecule and the resultant asymmetric similarity scores are used to prioritize the corresponding reactions and reactant sets. All sets of Basis Products are inherently indexed to specific reactions and specific starting materials. This again allows focusing on a much smaller sub-region for explicit enumeration and subsequent standard product-level similarity search. A set of validation studies were conducted.
The results have shown that the level of false negatives for the disassembly-based method is acceptable when the query molecule can be recognized for exact disassembly, and the fuzzy reaction mapping method based on Basis Products has an even better performance in terms of lower false-negative rate because it is not limited by the requirement that the query molecule needs to be recognized by any disassembly algorithm. Both search methods have been implemented and accessed through a powerful desktop molecular design tool (see ref. (33) for details). The chapter will end with a comparison of published search methods against large virtual chemical space.
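Once a sub-region has been explicitly enumerated, the product-level step is a conventional similarity search, commonly scored with Tanimoto (Jaccard) similarity over fingerprint sets. A sketch with invented toy fingerprints (real systems use hashed substructure fingerprints, and the PGVL reactant-level and Basis-Product logic is far richer than this):

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two feature sets,
    the standard score behind most 2D molecular similarity search."""
    if not a and not b:
        return 1.0
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

# Toy "fingerprints" (sets of feature ids) for a query and two
# hypothetical candidates; names and features are made up.
query = {1, 2, 3, 4}
candidates = {"cpdA": {1, 2, 3, 5}, "cpdB": {7, 8, 9}}
ranked = sorted(candidates,
                key=lambda k: tanimoto(query, candidates[k]),
                reverse=True)
```

Ranking every candidate this way is exactly what becomes infeasible at 10^13 molecules, which is why both LEAP methods first shrink the space to a small enumerable sub-region.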
Pain assessment tools: is the content appropriate for use in palliative care?
Hølen, Jacob Chr; Hjermstad, Marianne Jensen; Loge, Jon Håvard; Fayers, Peter M; Caraceni, Augusto; De Conno, Franco; Forbes, Karen; Fürst, Carl Johan; Radbruch, Lukas; Kaasa, Stein
2006-12-01
Inadequate pain assessment prevents optimal treatment in palliative care. The content of pain assessment tools might limit their usefulness for proper pain assessment, but data on the content validity of the tools are scarce. The objective of this study was to examine the content of the existing pain assessment tools, and to evaluate the appropriateness of different dimensions and items for pain assessment in palliative care. A systematic search was performed to find pain assessment tools for patients with advanced cancer who were receiving palliative care. An ad hoc search with broader search criteria supplemented the systematic search. The items of the identified tools were allocated to appropriate dimensions. This was reviewed by an international panel of experts, who also evaluated the relevance of the different dimensions for pain assessment in palliative care. The systematic literature search generated 16 assessment tools while the ad hoc search generated 64. Ten pain dimensions containing 1,011 pain items were identified by the experts. The experts ranked intensity, temporal pattern, treatment and exacerbating/relieving factors, location, and interference with health-related quality of life as the most important dimensions. None of the assessment tools covered these dimensions satisfactorily. Most items were related to interference (231) and intensity (138). Temporal pattern (which includes breakthrough pain), ranked as the second most important dimension, was covered by 29 items only. Many tools include dimensions and items of limited relevance for patients with advanced cancer. This might reduce compliance and threaten the validity of the assessment. New tools should reflect the clinical relevance of different dimensions and be user-friendly.
CRCDA—Comprehensive resources for cancer NGS data analysis
Thangam, Manonanthini; Gopal, Ramesh Kumar
2015-01-01
Next generation sequencing (NGS) innovations have been a landmark in life science and have changed the direction of research in clinical oncology through their productivity in diagnosing and treating cancer. The aim of our portal, comprehensive resources for cancer NGS data analysis (CRCDA), is to provide a collection of different NGS tools and pipelines under diverse classes, together with cancer pathways and databases and, furthermore, literature information from PubMed. The literature data were constrained to the 18 most common cancer types, such as breast cancer and colon cancer, that occur in the worldwide population. For convenience, NGS-cancer tools have been categorized into cancer genomics, cancer transcriptomics, cancer epigenomics, quality control, and visualization. Pipelines for variant detection, quality control, and data analysis were listed to provide an out-of-the-box solution for NGS data analysis, which may help researchers to overcome challenges in selecting and configuring individual tools for analysing exome, whole genome, and transcriptome data. An extensive search page was developed that can be queried by using (i) type of data [literature, gene data and sequence read archive (SRA) data] and (ii) type of cancer (selected based on global incidence and accessibility of data). For each category of analysis, a variety of tools are available, and the biggest challenge is searching for and using the right tool for the right application. The objective of the work is to collect the tools available in each category at various places and arrange the tools and other data in a simple and user-friendly manner for biologists and oncologists to find information more easily. To the best of our knowledge, we have collected and presented a comprehensive package of most of the resources available for NGS data analysis in cancer. Given these factors, we believe that this website will be a useful resource to the NGS research community working on cancer.
Database URL: http://bioinfo.au-kbc.org.in/ngs/ngshome.html. PMID:26450948
Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M
2012-04-05
The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data are interrogated via searches linked to the genome visualizations to find regions with a high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is demonstrated on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002), showing the rapid manner in which mis-annotations can be found and explored using either proteomics data alone or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes.
Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and core programming requirements for visualization of these large heterogeneous datasets for a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.
An experimental study of search in global social networks.
Dodds, Peter Sheridan; Muhamad, Roby; Watts, Duncan J
2003-08-08
We report on a global social-search experiment in which more than 60,000 e-mail users attempted to reach one of 18 target persons in 13 countries by forwarding messages to acquaintances. We find that successful social search is conducted primarily through intermediate to weak strength ties, does not require highly connected "hubs" to succeed, and, in contrast to unsuccessful social search, disproportionately relies on professional relationships. By accounting for the attrition of message chains, we estimate that social searches can reach their targets in a median of five to seven steps, depending on the separation of source and target, although small variations in chain lengths and participation rates generate large differences in target reachability. We conclude that although global social networks are, in principle, searchable, actual success depends sensitively on individual incentives.
Galileo Teacher Training Program - GTTP Days
NASA Astrophysics Data System (ADS)
Heenatigala, T.; Doran, R.
2012-09-01
Despite the vast availability of teaching resources on the internet, finding quality, user-friendly materials is a challenge. Many teachers are not trained in the computing skills needed to search for the right materials. With years of expertise in training teachers globally, the Galileo Teacher Training Program (GTTP) [1] recognized the need for a go-to place for teachers to access resources. To fill this need, GTTP developed GTTP Days, a program creating resource guides for the planetary, lunar, and solar fields. To avoid the imbalance in science resources between the developed and developing world, GTTP Days is available both online and offline as a printable version. Each resource guide covers areas such as scientific knowledge, exploration, observation, photography, art & culture, and web tools. The lesson plans of each guide include hands-on activities, web tools, software tools, and activities for people with disabilities [2]. Each activity indicates the concepts used, the skills required, and the age level, which guides teachers and educators in selecting content suitable for the local curriculum.
Protein structural similarity search by Ramachandran codes
Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang
2007-01-01
Background Protein structural data have increased exponentially, such that fast and accurate tools are necessary for structural similarity search. To improve search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed still cannot match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to structural similarity search. Its accuracy is similar to Combinatorial Extension (CE), and it works over 243,000 times faster, searching 34,000 proteins in 0.34 s with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program that runs on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated, high-throughput functional annotation or prediction for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
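The encoding step SARST describes, reducing a 3D structure to a 1D string, can be illustrated by mapping each residue's (phi, psi) dihedral pair to the letter of its nearest cluster center on the Ramachandran map. The centers and three-letter alphabet below are invented for illustration (the real SARST alphabet comes from nearest-neighbor clustering of observed dihedrals, and angle wrap-around on the torus is ignored here):

```python
import math

# Illustrative (phi, psi) cluster centers -> code letters; these
# values are made up, not the SARST alphabet.
CENTERS = {"H": (-60.0, -45.0),   # helix-like region
           "E": (-120.0, 130.0),  # strand-like region
           "L": (60.0, 45.0)}     # left-handed region

def encode(dihedrals):
    """Map each (phi, psi) pair to its nearest center's letter,
    producing a 1D string that ordinary sequence-alignment tools
    (BLAST-style search, substitution matrices) can then compare."""
    out = []
    for phi, psi in dihedrals:
        letter = min(CENTERS,
                     key=lambda c: math.hypot(phi - CENTERS[c][0],
                                              psi - CENTERS[c][1]))
        out.append(letter)
    return "".join(out)

code = encode([(-58, -47), (-118, 125), (-63, -41)])
```

The speed advantage follows directly: once structures are strings, a database scan costs roughly what a sequence similarity search costs.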
Systematic review of surveillance by social media platforms for illicit drug use.
Kazemi, Donna M; Borsari, Brian; Levine, Maureen J; Dooley, Beau
2017-12-01
The use of social media (SM) as a surveillance tool of global illicit drug use is limited. To address this limitation, a systematic review of the literature was conducted, focused on the ability of SM to recognize trends in illicit drug use. A search was conducted in the databases PubMed, CINAHL via Ebsco, PsychINFO via Ebsco, Medline via Ebsco, ERIC, Cochrane Library, Science Direct, ABI/INFORM Complete, and Communication and Mass Media Complete. Included studies were original research published in peer-reviewed journals between January 2005 and June 2015 that primarily focused on collecting data from SM platforms to track trends in illicit drug use. Excluded were studies focused on purchasing prescription drugs from illicit online pharmacies. Selected studies used a range of SM tools/applications, including message boards, Twitter, and blog/forum/platform discussions. Limitations included relevance, a lack of standardized surveillance systems, and a lack of efficient algorithms to isolate relevant items. Illicit drug use is a worldwide problem, and the rise of global social networking sites has provided a readily accessible surveillance tool. Systematic approaches need to be developed to efficiently extract and analyze illicit drug content from social networks to supplement effective prevention programs.
Project Lefty: More Bang for the Search Query
ERIC Educational Resources Information Center
Varnum, Ken
2010-01-01
This article describes Project Lefty, a search system that, at a minimum, adds a layer on top of traditional federated search tools to make the wait for results more worthwhile for researchers. At best, Project Lefty improves search queries and relevance rankings for web-scale discovery tools to make the results themselves more relevant…
An MPI+X implementation of contact global search using Kokkos
Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...
2015-10-05
This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find "nearest neighbors," which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees, and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank.
Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation on an 18M-face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
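The cheap-reject step at the heart of the contact global search, computing bounding boxes for surface entities and testing them for overlap before any expensive imprinting, can be sketched as follows (serial Python for illustration only; the paper's implementation is MPI-parallel with Kokkos thread parallelism and tree-based search):

```python
def bbox(points):
    """Axis-aligned bounding box (lo, hi) of a set of 3D points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def overlaps(a, b, tol=0.0):
    """True if two AABBs intersect, inflated by a capture tolerance
    `tol`; the cheap reject test run before any imprinting search."""
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] - tol <= bhi[i] and blo[i] - tol <= ahi[i]
               for i in range(3))

# Hypothetical face vertex coordinates, invented for illustration.
face1 = bbox([(0, 0, 0), (1, 1, 0)])
face2 = bbox([(0.9, 0.9, -0.1), (2, 2, 0.1)])
face3 = bbox([(5, 5, 5), (6, 6, 6)])
near = overlaps(face1, face2)
far = overlaps(face1, face3)
```

In the MPI-X setting, each rank evaluates millions of such box tests (organized into search trees) in parallel on the GPU, and only surviving candidate pairs are shipped between ranks.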
The NASA NEESPI Data Portal: Products, Information, and Services
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory; Loboda, Tatiana; Csiszar, Ivan; Romanov, Peter; Gerasimov, Irina
2008-01-01
Studies have indicated that land cover and land use changes in Northern Eurasia influence the global climate system. However, the processes are not fully understood, and it is challenging to understand the interactions between land changes in this region and the global climate. Integrated data collections from multiple disciplines are important for studies of climate and environmental change. Remotely sensed and model data are particularly important due to sparse in situ measurements in many Eurasian regions, especially Siberia. The NASA GES DISC (Goddard Earth Sciences Data and Information Services Center) NEESPI data portal provides infrastructure for satellite remote sensing and numerical model data for the atmosphere, land surface, and cryosphere. Data searching, subsetting, and downloading functions are available. One useful tool is the Web-based online data analysis and visualization system, Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure), which allows scientists to easily assess the state and dynamics of terrestrial ecosystems in Northern Eurasia and their interactions with the global climate system. Recently, we created a metadata database prototype to expand the NASA NEESPI data portal, providing a venue for NEESPI scientists to find the desired data easily and leveraging data sharing within NEESPI projects. The database provides product-level information. Desired data can be found through navigation and free-text search, and narrowed down by filtering with a number of constraints. In addition, we developed a Web Map Service (WMS) prototype to allow access to data and images from different data resources.
Rana, Gurpreet K; Bradley, Doreen R; Hamstra, Stanley J; Ross, Paula T; Schumacher, Robert E; Frohna, John G; Haftel, Hilary M; Lypson, Monica L
2011-01-01
The objective of this study was to validate an assessment instrument for MEDLINE search strategies at an academic medical center. Two approaches were used to investigate if the search assessment tool could capture performance differences in search strategy construction. First, data from an evaluation of MEDLINE searches from a pediatric resident's longitudinal assessment were investigated. Second, a cross-section of search strategies from residents in one incoming class was compared with strategies of residents graduating a year later. MEDLINE search strategies formulated by faculty who had been identified as having search expertise were used as a gold standard comparison. Participants were presented with a clinical scenario and asked to identify the search question and conduct a MEDLINE search. Two librarians rated the blinded search strategies. Search strategy scores were significantly higher for residents who received training than the comparison group with no training. There was no significant difference in search strategy scores between senior residents who received training and faculty experts. The results provide evidence for the validity of the instrument to evaluate MEDLINE search strategies. This assessment tool can measure improvements in information-seeking skills and provide data to fulfill Accreditation Council for Graduate Medical Education competencies.
NASA Astrophysics Data System (ADS)
Auluck, S. K. H.
2014-12-01
Dense plasma focus (DPF) is known to produce highly energetic ions and electrons and a plasma environment that can be used for breeding short-lived isotopes, plasma nanotechnology, and other material-processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for the dynamic inductance of a Mather-type plasma focus, fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of the plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance.
Fundamental resource-allocating model in colleges and universities based on Immune Clone Algorithms
NASA Astrophysics Data System (ADS)
Ye, Mengdie
2017-05-01
In this thesis we map the optimal course arrangement problem onto Immune Clone Algorithms, treating candidate timetables as antibodies and the optimal arrangement as the antigen. Following the character of the algorithm, we apply cloning, clonal mutation, and clonal selection to arrange courses. The clone operator combines evolutionary search with random search, and global search with local search. By cloning and mutating candidate solutions, we can find the global optimal solution quickly.
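The clone/mutate/select loop described above can be sketched on a toy problem. The OneMax-style objective, population sizes, and mutation schedule below are illustrative stand-ins, not the thesis's actual course encoding:

```python
import random

random.seed(0)

def fitness(antibody):
    # Toy affinity: count of satisfied slots. OneMax stands in for a real
    # timetable score such as "constraint-free course placements".
    return sum(antibody)

def clonal_selection(n_bits=20, pop_size=10, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = []
        for rank, ab in enumerate(pop):
            # Clone count shrinks and mutation rate grows with rank, mixing
            # local search (many lightly mutated clones of good antibodies)
            # with global search (heavily mutated clones of poor ones).
            n_clones = pop_size - rank
            rate = 0.05 * (rank + 1)
            clones = [[bit ^ (random.random() < rate) for bit in ab]
                      for _ in range(n_clones)]
            # Clonal selection: keep the best of the parent and its clones.
            next_pop.append(max(clones + [ab], key=fitness))
        pop = next_pop
    return max(pop, key=fitness)

best = clonal_selection()
print(fitness(best))
```

Ranking antibodies before cloning is what lets one operator behave as both a local and a global search, as the abstract claims.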
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
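As a rough illustration of the "coarse footprint overlap" measure described above, the sketch below blurs two small binary images and takes their normalized dot product; the images and blur radius are invented for the example and are not taken from the study:

```python
def box_blur(img, radius=1):
    # Coarse "footprint": a local box average that discards fine spatial detail.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def footprint_overlap(img_a, img_b, radius=1):
    # Normalized dot product of the blurred images: 1.0 = identical footprints.
    a, b = box_blur(img_a, radius), box_blur(img_b, radius)
    dot = sum(pa * pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    na = sum(p * p for row in a for p in row) ** 0.5
    nb = sum(p * p for row in b for p in row) ** 0.5
    return dot / (na * nb)

# Same local features ("L" corners) in two different global arrangements.
img1 = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
img2 = [[0, 0, 1, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0],
        [1, 1, 0, 0]]

print(footprint_overlap(img1, img1))  # identical pair
print(footprint_overlap(img1, img2))  # rearranged pair
```

Under the paper's account, an image pair with higher footprint overlap should yield less efficient oddball search.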
Finding collaborators: toward interactive discovery tools for research network systems.
Borromeo, Charles D; Schleyer, Titus K; Becich, Michael J; Hochheiser, Harry
2014-11-04
Research networking systems hold great promise for helping biomedical scientists identify collaborators with the expertise needed to build interdisciplinary teams. Although efforts to date have focused primarily on collecting and aggregating information, less attention has been paid to the design of end-user tools for using these collections to identify collaborators. To be effective, collaborator search tools must provide researchers with easy access to information relevant to their collaboration needs. The aim was to study user requirements and preferences for research networking system collaborator search tools and to design and evaluate a functional prototype. Paper prototypes exploring possible interface designs were presented to 18 participants in semistructured interviews aimed at eliciting collaborator search needs. Interview data were coded and analyzed to identify recurrent themes and related software requirements. Analysis results and elements from paper prototypes were used to design a Web-based prototype using the D3 JavaScript library and VIVO data. Preliminary usability studies asked 20 participants to use the tool and to provide feedback through semistructured interviews and completion of the System Usability Scale (SUS). Initial interviews identified consensus regarding several novel requirements for collaborator search tools, including chronological display of publication and research funding information, the need for conjunctive keyword searches, and tools for tracking candidate collaborators. Participant responses were positive (SUS score: mean 76.4%, SD 13.9). Opportunities for improving the interface design were identified. Interactive, timeline-based displays that support comparison of researcher productivity in funding and publication have the potential to effectively support searching for collaborators. 
Further refinement and longitudinal studies may be needed to better understand the implications of collaborator search tools for researcher workflows.
Data Discovery of Big and Diverse Climate Change Datasets - Options, Practices and Challenges
NASA Astrophysics Data System (ADS)
Palanisamy, G.; Boden, T.; McCord, R. A.; Frame, M. T.
2013-12-01
Developing data search tools is a very common, but often confusing, task for most data-intensive scientific projects. These search interfaces need to be continually improved to handle the ever-increasing diversity and volume of data collections. There are many aspects that determine the type of search tool a project needs to provide to its user community. These include: the number of datasets, the amount and consistency of discovery metadata, ancillary information such as the availability of quality information and provenance, and the availability of similar datasets from other distributed sources. The Environmental Data Science and Systems (EDSS) group within the Environmental Science Division at the Oak Ridge National Laboratory has a long history of successfully managing diverse and big observational datasets for various scientific programs via data centers such as DOE's Atmospheric Radiation Measurement Program (ARM), DOE's Carbon Dioxide Information and Analysis Center (CDIAC), USGS's Core Science Analytics and Synthesis (CSAS) metadata Clearinghouse, and NASA's Distributed Active Archive Center (ORNL DAAC). This talk will showcase some of the recent developments for improving data discovery within these centers. The DOE ARM program recently developed a data discovery tool which allows users to search and discover over 4000 observational datasets. These datasets are key to research efforts related to global climate change. The ARM discovery tool features many new functions such as filtered and faceted search logic, multi-pass data selection, filtering data based on data quality, graphical views of data quality and availability, direct access to data quality reports, and data plots. The ARM Archive also provides discovery metadata to other broader metadata clearinghouses such as ESGF, IASOA, and GOS. In addition to the new interface, ARM is also currently working on providing DOI metadata records to publishers such as Thomson Reuters and Elsevier.
The ARM program also provides a standards-based online metadata editor (OME) for PIs to submit their data to the ARM Data Archive. The USGS CSAS metadata Clearinghouse aggregates metadata records from several USGS projects and other partner organizations. The Clearinghouse allows users to search and discover over 100,000 biological and ecological datasets from a single web portal. The Clearinghouse has also enabled new data discovery functions such as enhanced geospatial searches based on land and ocean classifications, metadata completeness rankings, data linkage via digital object identifiers (DOIs), and semantically enhanced keyword searches. The Clearinghouse is also currently working on a dashboard that allows data providers to view statistics such as the number of their records accessed via the Clearinghouse, the most popular keywords, metadata quality reports, and use of the DOI creation service. The Clearinghouse also publishes metadata records to broader portals such as NSF DataONE and Data.gov. The author will also present how these capabilities are currently reused by recent and upcoming data centers such as DOE's NGEE-Arctic project. References: [1] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L., Killeffer, T., Krassovski, M., ... & Frame, M. (2014, October). OME: Tool for generating and managing metadata to handle BigData. In BigData Conference (pp. 8-10).
Software Assessment of the Global Force Management (GFM) Search Capability Study
2017-02-01
Study by Timothy Hanratty, Mark Mittrick, Alex Vertlieb, and Frederick Brundick, Army Research Laboratory. Approved for public release.
NASA Astrophysics Data System (ADS)
Overoye, D.; Lewis, C.
2016-12-01
The Global Learning and Observations to Benefit the Environment (GLOBE) Program is a worldwide hands-on, primary and secondary school-based science and education program founded on Earth Day 1995. Implemented in 117 countries, GLOBE promotes the teaching and learning of science, supporting students, teachers, and scientists worldwide in collaborating with each other on inquiry-based investigations of the Earth system. As an international platform supporting a large number and variety of stakeholders, the GLOBE Data Information System (DIS) was re-built with the goal of providing users the support needed to foster and develop collaboration between teachers, students, and scientists while supporting the collection and visualization of over 50 different earth science investigations (protocols). There have been many challenges to consider as we have worked to prototype and build various tools to support collaboration across the GLOBE community: language, security, time zones, user roles, and the Child Online Protection Act (COPA), to name a few. During the 3 years that the re-built DIS has been in operation, we have supported user-to-user collaboration, school-to-school collaboration, project/campaign-to-user collaboration, and scientist-to-scientist collaboration. We have built search tools to facilitate finding collaboration partners. The tools and direction continue to evolve based on feedback, evolving needs, and changes in technology. In this paper we discuss our approach for dealing with some of the collaboration challenges, review tools built to encourage and support collaboration, and analyze which tools have been successful and which have not. We will also review new ideas for collaboration in the GLOBE community that are guiding upcoming development.
Wilkinson, Jessica; Goff, Morgan; Rusoja, Evan; Hanson, Carl; Swanson, Robert Chad
2018-06-01
This review of systems thinking (ST) case studies seeks to compile and analyse cases from the ST literature and provide practitioners with a reference for ST in health practice. Particular attention was given to (1) reviewing the frequency and use of key ST terms, methods, and tools in the context of health, and (2) extracting and analysing longitudinal themes across cases. A systematic search of databases was conducted, and a total of 36 case studies were identified. A combination of integrative and inductive qualitative approaches to analysis was used. Most cases identified took place in high-income countries and applied ST retrospectively. The most commonly used ST terms were agent/stakeholder/actor (n = 29), interdependent/interconnected (n = 28), emergence (n = 26), and adaptability/adaptation (n = 26). Common ST methods and tools were largely underutilized. Social network analysis was the most commonly used method (n = 4), and innovation or change management history was the most frequently used tool (n = 11). Four overarching themes were identified: the importance of the interdependent and interconnected nature of a health system, characteristics of leaders in a complex adaptive system, the benefits of using ST, and barriers to implementing ST. This review revealed that while much has been written about the potential benefits of applying ST to health, it has yet to completely transition from theory to practice. There is, however, evidence of the practical use of an ST lens as well as specific methods and tools. With clear examples of ST applications, the global health community will be better equipped to understand and address key health challenges. © 2017 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Georgas, Helen
2014-01-01
This study examines the information-seeking behavior of undergraduate students within a research context. Student searches were recorded while the participants used Google and a library (federated) search tool to find sources (one book, two articles, and one other source of their choosing) for a selected topic. The undergraduates in this study…
ERIC Educational Resources Information Center
Georgas, Helen
2013-01-01
Federated searching was once touted as the library world's answer to Google, but ten years since federated searching technology's inception, how does it actually compare? This study focuses on undergraduate student preferences and perceptions when doing research using both Google and a federated search tool. Students were asked about their…
Citizen science, GIS, and the global hunt for landslides
NASA Astrophysics Data System (ADS)
Juang, C.; Stanley, T.; Kirschbaum, D.
2017-12-01
Landslides occur across the United States and around the world, causing much suffering and infrastructure damage. Many of these events have been recorded in the Global Landslide Catalog (GLC), a worldwide record of recent rainfall-triggered landslides. The extent and composition of this database have been affected by the limits of media search tools and available staffing. Citizen scientists could expand the effort exponentially, as well as diversify the knowledge base of the research team. In order to enable this collaboration, the NASA Center for Climate Simulation has created a GIS portal for viewing, editing, and managing the GLC. The data are also exposed through a REST API for easy incorporation into geospatial websites by third parties. Future developments may include the ability to store polygons delineating large landslides, digitization from recent satellite imagery, and the establishment of a community for international landslide research that is open to both lay and academic users.
New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS
NASA Astrophysics Data System (ADS)
Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.
2016-12-01
Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). 
Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.
Genetic algorithms as global random search methods
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
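A minimal sketch of the framing above: proportional (roulette) selection acts as the global search operator, while one-point recombination exploits similarities between parents. The 16-bit objective and all parameters are illustrative, not from the paper:

```python
import random

random.seed(2)
N_BITS = 16

def decode(bits):
    # Map a bitstring to a real value in [0, 1].
    return int("".join(map(str, bits)), 2) / (2 ** N_BITS - 1)

def fitness(bits):
    x = decode(bits)
    return x * x  # toy objective: global optimum at the all-ones string

def roulette(pop, fits):
    # Proportional selection: the "global search" operator in this framing.
    total = sum(fits)
    r = random.uniform(0, total)
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def crossover(a, b):
    # One-point recombination: the operator that exploits parental similarity.
    point = random.randrange(1, N_BITS)
    return a[:point] + b[point:]

def mutate(bits, rate=1.0 / N_BITS):
    return [b ^ (random.random() < rate) for b in bits]

def ga(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        elite = max(pop, key=fitness)  # keep the incumbent best
        pop = [mutate(crossover(roulette(pop, fits), roulette(pop, fits)))
               for _ in range(pop_size - 1)] + [elite]
    return max(pop, key=fitness)

best = ga()
print(decode(best))
```

Note the elitism step: retaining the incumbent is one simple way of constraining the search so the best-so-far is never lost, in the spirit of the convergence conditions the abstract discusses.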
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the searching process, an initialization generator is constructed using the PSO algorithm, owing to its strong global searching ability and robustness to random initial values; however, the PSO algorithm has the disadvantage that its convergence rate near the global optimum is slow. Therefore, when the change in the fitness function becomes smaller than a predefined value, the searching algorithm is switched to the LPM to accelerate the searching process. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than either the PSO algorithm or the LPM alone. Moreover, the PSO-LPM algorithm is also robust to random initial values.
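The global-then-local switching scheme can be sketched as follows. Because the Legendre pseudospectral step is specific to trajectory optimization, a generic pattern search stands in for the LPM here, and a Rastrigin test function stands in for the time-optimal cost; both substitutions are for illustration only:

```python
import math
import random

random.seed(3)

def f(x):
    # 2-D Rastrigin test function: many local minima, global minimum 0 at (0, 0).
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def pso_phase(dim=2, n_particles=20, tol=1e-3, max_iter=200, patience=10):
    # Global phase: plain PSO is robust to random initial values but slows
    # down near the optimum, so we watch for a stall in the best fitness.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    prev, stall = f(gbest), 0
    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
        cur = f(gbest)
        stall = stall + 1 if prev - cur < tol else 0
        prev = cur
        if stall >= patience:  # change in fitness is small: switch phases
            break
    return gbest

def local_phase(x, step=0.1, shrink=0.5, iters=60):
    # Local phase: a simple pattern search stands in for the problem-specific
    # Legendre pseudospectral refinement used in the paper.
    x = x[:]
    for _ in range(iters):
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[d] += delta
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= shrink
    return x

x0 = pso_phase()         # global search supplies the initial guess
xstar = local_phase(x0)  # local search accelerates final convergence
print(f(x0), f(xstar))
```

The handover criterion mirrors the abstract: switch when the change in fitness drops below a predefined value, using the global phase's answer as the local phase's initial guess.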
Conformational flexibility of two RNA trimers explored by computational tools and database search.
Fadrná, Eva; Koca, Jaroslav
2003-04-01
Two RNA sequences, AAA and AUG, were studied by the conformational search program CICADA and by molecular dynamics (MD) in the framework of the AMBER force field, and also via a thorough PDB database search. CICADA was used to provide detailed information about conformers and conformational interconversions on the energy surfaces of the above molecules. Several conformational families were found for both sequences. Analysis of the results shows differences, especially in the energies of the individual families, and also in flexibility and concerted conformational movement. Therefore, several MD trajectories (altogether 16 ns) were run to obtain more details about both the stability of conformers belonging to different conformational families and the dynamics of the two systems. The results show that the trajectories strongly depend on the starting structure. When the MD runs start from the global minimum found by CICADA, they provide a stable trajectory, while MD starting from another conformational family generates a trajectory in which several different conformational families are visited. The results obtained by theoretical methods are compared with the thorough database search data. It is concluded that all but the highest-energy conformational families found in the theoretical results also appear in the experimental data. Registry numbers: adenylyl-(3' --> 5')-adenylyl-(3' --> 5')-adenosine [917-44-2]; adenylyl-(3' --> 5')-uridylyl-(3' --> 5')-guanosine [3494-35-7].
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for the multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to carry out objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions for this real-world application. This study shows that, in comparison with the original NPTS and NSGA-II, the MS parallel NPTSGA can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
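At the core of both NPTS and NSGA-II is non-dominated (Pareto) filtering, which can be sketched in a few lines; the cost/contaminant-mass pairs below are invented toy values, not MMR data:

```python
def dominates(a, b):
    # a dominates b (minimization) if it is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Non-dominated set: the filter shared by NPTS and NSGA-II.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy remediation trade-off: (cleanup cost, residual contaminant mass).
designs = [(5.0, 9.0), (3.0, 6.0), (6.0, 2.0), (4.0, 4.0), (8.0, 1.0), (4.0, 7.0)]
front = pareto_front(designs)
print(sorted(front))
```

Each design on the front is a defensible compromise: no other design is cheaper without leaving more contaminant behind. In the paper's setting each point is obtained from a MODFLOW/MT3DMS simulation, which is why parallel evaluation matters.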
Electronic Collection Management and Electronic Information Services
2004-12-01
federated search tools are still being perfected with much debate surrounding their use. Encouragingly, as the federated search tools have evolved...institutional repositories to be included in a federated search process, libraries would have to harvest the metadata from the repositories and then make...providers in Library High Tech News. At this time, federated search engines serve some user groups better than others. Undergraduate students are well
NASA Astrophysics Data System (ADS)
Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan
2011-11-01
The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate in detail its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric), and optimization efficiency (indexed by computational burden and CPU time). The BBA outperformed its competitors in almost all the quantitative indices. Hence, the overall scheme, and particularly the searching-history-adapted global search strategy, was validated.
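The BBA's explicit split between exploration and exploitation can be sketched as follows; the hidden-target objective and all parameter values are illustrative, not the paper's MAP formulation:

```python
import random

random.seed(4)
N_TASKS = 24  # bits in a candidate assignment vector

def hamming_neighbour(sol, radius):
    # Local move: flip at most `radius` bits, i.e. sample the Hamming ball.
    out = sol[:]
    for i in random.sample(range(len(sol)), random.randint(1, radius)):
        out[i] ^= 1
    return out

def bees(n_scouts=20, n_elite=3, n_recruits=10, radius=2, iters=60):
    # Toy objective: match a hidden target assignment (a stand-in for the
    # paper's multiobjective MAP score).
    target = [random.randint(0, 1) for _ in range(N_TASKS)]

    def score(sol):
        return sum(a == b for a, b in zip(sol, target))

    pop = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(n_scouts)]
    for _ in range(iters):
        pop.sort(key=score, reverse=True)
        new_pop = []
        for elite in pop[:n_elite]:
            # Exploitation: recruited bees search the elite's Hamming neighbourhood.
            cands = [hamming_neighbour(elite, radius) for _ in range(n_recruits)]
            new_pop.append(max(cands + [elite], key=score))
        # Exploration: the remaining scouts restart at random sites.
        new_pop += [[random.randint(0, 1) for _ in range(N_TASKS)]
                    for _ in range(n_scouts - n_elite)]
        pop = new_pop
    return score(max(pop, key=score))

best_score = bees()
print(best_score, "/", N_TASKS)
```

The functional partitioning is explicit in the loop body: elite sites get concentrated local search, while fresh scouts keep the global search alive.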
Improving immunization approaches to cholera.
Saha, Amit; Rosewell, Alexander; Hayen, Andrew; MacIntyre, C Raina; Qadri, Firdausi
2017-03-01
Cholera's impact is greatest in resource-limited countries. In the last decade several large epidemics have led to a global push to improve and implement the tools for cholera prevention and control. Areas covered: PubMed, Google Scholar and the WHO website were searched to review the literature and summarize the current status of cholera vaccines to make recommendations on improving immunization approaches to cholera. Oral cholera vaccines (OCVs) have demonstrated their effectiveness in endemic, outbreak response and emergency settings, highlighting their potential for wider adoption. While two doses of the currently available OCVs are recommended by manufacturers, a single dose would be easier to implement. Encouragingly, recent studies have shown that cold chain requirements may no longer be essential. The establishment of the global OCV stockpile in 2013 has been a major advance in cholera preparedness. New killed and live-attenuated vaccines are being actively explored as candidate vaccines for endemic settings and/or as a traveller's vaccine. The recent advances in cholera vaccination approaches should be considered in the global cholera control strategy. Expert commentary: The development of affordable cholera vaccines is a major success to improve cholera control. New vaccines and country specific interventions will further reduce the burden of this disease globally.
Oliveira, Jorge; Gamito, Pedro; Alghazzawi, Daniyal M; Fardoun, Habib M; Rosa, Pedro J; Sousa, Tatiana; Picareli, Luís Felipe; Morais, Diogo; Lopes, Paulo
2017-08-14
This investigation sought to understand whether performance in naturalistic virtual reality tasks for cognitive assessment relates to the cognitive domains that are supposed to be measured. The Shoe Closet Test (SCT) was developed based on a simple visual search task involving attention skills, in which participants have to match each pair of shoes with the colors of the compartments in a virtual shoe closet. The interaction within the virtual environment was made using the Microsoft Kinect. The measures consisted of concurrent paper-and-pencil neurocognitive tests for global cognitive functioning, executive functions, attention, psychomotor ability, and the outcomes of the SCT. The results showed that the SCT correlated with global cognitive performance as measured with the Montreal Cognitive Assessment (MoCA). The SCT explained one third of the total variance of this test and revealed good sensitivity and specificity in discriminating scores below one standard deviation in this screening tool. These findings suggest that performance of such functional tasks involves a broad range of cognitive processes that are associated with global cognitive functioning and that may be difficult to isolate through paper-and-pencil neurocognitive tests.
Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.
Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. As a result, a faster intra-sequence pairwise alignment implementation is described and benchmarked. Using a 375-residue query sequence, a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 12-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. When using only a single thread, parasail was 1.7 times faster than Rognes's SWIPE. For many score matrices, parasail is faster than BLAST. The software library is designed for 64-bit Linux, OS X, or Windows on processors with SSE2, SSE4.1, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. In conclusion, applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
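The following is not parasail's SIMD code, but a scalar sketch of the Needleman-Wunsch global-alignment recurrence that such libraries vectorize; the scoring parameters are arbitrary example values:

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    # Scalar dynamic-programming global alignment. SIMD libraries compute
    # this same score matrix, but stripe it across vector registers so many
    # cells are updated per instruction.
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        H[i][0] = i * gap  # aligning a prefix of `a` against nothing
    for j in range(1, m + 1):
        H[0][j] = j * gap  # aligning a prefix of `b` against nothing
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
    return H[n][m]

score = needleman_wunsch("GATTACA", "GCATGCU")
print(score)
```

Local (Smith-Waterman) and semi-global variants differ mainly in how they clamp or initialize this matrix, which is why one library can provide all three.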
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but also most resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of solving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
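The branching-and-bounding pattern whose parallel load balancing such a simulator models can be shown on a toy problem. The following 0/1 knapsack solver is a minimal illustrative sketch, not the authors' simulator; all names and the fractional-relaxation bound are assumptions:

```python
# Toy branch-and-bound for the 0/1 knapsack problem. Each call to branch()
# is one node of the search tree; bound() gives an optimistic upper bound
# (fractional relaxation) used to prune subtrees that cannot beat the
# incumbent -- the pruning that makes B&B search trees irregular and hard
# to load-balance in parallel.
def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Consider items in decreasing value density for a tighter bound.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, cap, val):
        # Optimistic bound: fill remaining capacity fractionally.
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    def branch(k, cap, val):
        nonlocal best
        if k == n:
            best = max(best, val)
            return
        if bound(k, cap, val) <= best:
            return                      # prune: subtree cannot improve incumbent
        i = order[k]
        if weights[i] <= cap:           # branch 1: take item i
            branch(k + 1, cap - weights[i], val + values[i])
        branch(k + 1, cap, val)         # branch 2: skip item i

    branch(0, capacity, 0)
    return best
```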
PDBe: Protein Data Bank in Europe
Gutmanas, Aleksandras; Alhroub, Younes; Battle, Gary M.; Berrisford, John M.; Bochet, Estelle; Conroy, Matthew J.; Dana, Jose M.; Fernandez Montecelo, Manuel A.; van Ginkel, Glen; Gore, Swanand P.; Haslam, Pauline; Hatherley, Rowan; Hendrickx, Pieter M.S.; Hirshberg, Miriam; Lagerstedt, Ingvar; Mir, Saqib; Mukhopadhyay, Abhik; Oldfield, Thomas J.; Patwardhan, Ardan; Rinaldi, Luana; Sahni, Gaurav; Sanz-García, Eduardo; Sen, Sanchayita; Slowley, Robert A.; Velankar, Sameer; Wainwright, Michael E.; Kleywegt, Gerard J.
2014-01-01
The Protein Data Bank in Europe (pdbe.org) is a founding member of the Worldwide PDB consortium (wwPDB; wwpdb.org) and as such is actively engaged in the deposition, annotation, remediation and dissemination of macromolecular structure data through the single global archive for such data, the PDB. Similarly, PDBe is a member of the EMDataBank organisation (emdatabank.org), which manages the EMDB archive for electron microscopy data. PDBe also develops tools that help the biomedical science community to make effective use of the data in the PDB and EMDB for their research. Here we describe new or improved services, including updated SIFTS mappings to other bioinformatics resources, a new browser for the PDB archive based on Gene Ontology (GO) annotation, updates to the analysis of Nuclear Magnetic Resonance-derived structures, redesigned search and browse interfaces, and new or updated visualisation and validation tools for EMDB entries. PMID:24288376
Motalebi G, Masoud; Keshavarz Mohammadi, Nastaran; Kuhn, Karl; Ramezankhani, Ali; Azari, Mansour R
2018-06-01
Health promoting workplace frameworks provide a holistic view of the determinants of workplace health and the link between individuals, work and environment; however, the operationalization of these frameworks has not been very clear. This study provides a typology of the different understandings and frameworks/tools used in workplace health promotion practice or research worldwide. It discusses the degree of their conformity with the Ottawa Charter's spirit and the key actions expected to be implemented in health promoting settings such as workplaces. A comprehensive online search was conducted utilizing relevant key words. The search also included official websites of related international, regional, and national organizations. After exclusion, 27 texts were analysed utilizing conventional content analysis. The results of the analysis were categorized as dimensions (levels or main structures) of a healthy or health promoting workplace, and subcategorized as characteristics/criteria of a healthy/health promoting workplace. Our analysis shows diversity and ambiguity in the workplace health literature regarding the domains and characteristics of a healthy/health promoting workplace. This may have roots in the lack of a common understanding of the concepts or in different social and work environment contexts. Development of global or national health promoting workplace standards in a participatory process might be considered as a potential solution.
Swarm intelligence metaheuristics for enhanced data analysis and optimization.
Hanrahan, Grady
2011-09-21
The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.
A Descriptive and Interpretative Information System for the IODP
NASA Astrophysics Data System (ADS)
Blum, P.; Foster, P. A.; Mateo, Z.
2006-12-01
The ODP/IODP has a long and rich history of collecting descriptive and interpretative information (DESCINFO) from rock and sediment cores from the world's oceans. Unlike instrumental data, DESCINFO generated by subject experts is biased by the scientific and cultural background of the observers and their choices of classification schemes. As a result, global searches of DESCINFO and its integration with other data are problematic. To address this issue, the IODP-USIO is in the process of designing and implementing a DESCINFO system for IODP Phase 2 (2007-2013) that meets the user expectations expressed over the past decade. The requirements include support of (1) detailed, material property-based descriptions as well as classification-based descriptions; (2) global searches by physical sample and digital data sources as well as any of the descriptive parameters; (3) user-friendly data capture tools for a variety of workflows; (4) extensive visualization of DESCINFO data along with instrumental data and images; and (5) portability/interoperability such that the system can work with database schemas of other organizations - a specific challenge given the schema and semantic heterogeneity not only among the three IODP operators but within the geosciences in general. The DESCINFO approach is based on the definition of a set of generic observable parameters that are populated with numeric or text values. Text values are derived from controlled, extensible hierarchical value lists that allow descriptions at the appropriate level of detail and ensure successful data searches. Material descriptions can be completed independently of domain-specific classifications, genetic concepts, and interpretative frameworks.
Spjuth, Ola; Krestyaninova, Maria; Hastings, Janna; Shen, Huei-Yi; Heikkinen, Jani; Waldenberger, Melanie; Langhammer, Arnulf; Ladenvall, Claes; Esko, Tõnu; Persson, Mats-Åke; Heggland, Jon; Dietrich, Joern; Ose, Sandra; Gieger, Christian; Ried, Janina S; Peters, Annette; Fortier, Isabel; de Geus, Eco JC; Klovins, Janis; Zaharenko, Linda; Willemsen, Gonneke; Hottenga, Jouke-Jan; Litton, Jan-Eric; Karvanen, Juha; Boomsma, Dorret I; Groop, Leif; Rung, Johan; Palmgren, Juni; Pedersen, Nancy L; McCarthy, Mark I; van Duijn, Cornelia M; Hveem, Kristian; Metspalu, Andres; Ripatti, Samuli; Prokopenko, Inga; Harris, Jennifer R
2016-01-01
A wealth of biospecimen samples are stored in modern globally distributed biobanks. Biomedical researchers worldwide need to be able to combine the available resources to improve the power of large-scale studies. A prerequisite for this effort is to be able to search and access phenotypic, clinical and other information about samples that are currently stored at biobanks in an integrated manner. However, privacy issues together with heterogeneous information systems and the lack of agreed-upon vocabularies have made specimen searching across multiple biobanks extremely challenging. We describe three case studies where we have linked samples and sample descriptions in order to facilitate global searching of available samples for research. The use cases include the ENGAGE (European Network for Genetic and Genomic Epidemiology) consortium comprising at least 39 cohorts, the SUMMIT (surrogate markers for micro- and macro-vascular hard endpoints for innovative diabetes tools) consortium and a pilot for data integration between a Swedish clinical health registry and a biobank. We used the Sample avAILability (SAIL) method for data linking: first, created harmonised variables and then annotated and made searchable information on the number of specimens available in individual biobanks for various phenotypic categories. By operating on this categorised availability data we sidestep many obstacles related to privacy that arise when handling real values and show that harmonised and annotated records about data availability across disparate biomedical archives provide a key methodological advance in pre-analysis exchange of information between biobanks, that is, during the project planning phase. PMID:26306643
Comparison of three web-scale discovery services for health sciences research.
Hanneke, Rosie; O'Brien, Kelly K
2016-04-01
The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. All WSD tools returned between 50%-60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%-60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers.
ERIC Educational Resources Information Center
Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.
2013-01-01
Relative to low scorers, high scorers on the Autism-Spectrum Quotient (AQ) show enhanced performance on the Embedded Figures Test and the Radial Frequency search task (RFST), which has been attributed to both enhanced local processing and differences in combining global percepts. We investigate the role of local and global processing further using…
Labbé, Mathilde; Young, Meredith; Nguyen, Lily H P
2017-10-08
To support the development of programs of assessment of technical skills in the operating room (OR), we systematically reviewed the literature to identify assessment tools specific to otolaryngology-head and neck surgery (OTL-HNS) core procedures and summarized their characteristics. We systematically searched Embase, MEDLINE, PubMed, and Cochrane to identify and report on assessment tools that can be used to assess residents' technical surgical skills in the operating room for OTL-HNS core procedures. Of the 736 unique titles retrieved, 16 articles met inclusion criteria, covering 11 different procedures (in otology, rhinology, laryngology, head and neck, and general otolaryngology). The tools were composed of a task-specific checklist and/or global rating scale and were developed in the OR, on human cadavers, or in a simulation setting. Our study reports on published tools for assessing technical skills for OTL-HNS residents during core procedures conducted in the OR. These assessment tools could facilitate the provision of timely feedback to trainees including specific goals for improvement. However, the paucity of publications suggests little agreement on how to best perform work-based direct-observation assessment for core surgical procedures in OTL-HNS. The sparsity of tools specific to OTL-HNS may become a barrier to a fluid transition to competency-based medical education. Laryngoscope, 2017.
Doppler Imaging of Exoplanets and Brown Dwarfs
NASA Astrophysics Data System (ADS)
Crossfield, I.; Biller, B.; Schlieder, J.; Deacon, N.; Bonnefoy, M.; Homeier, D.; Allard, F.; Buenzli, E.; Henning, T.; Brandner, W.; Goldman, Bertr; Kopytova, T.
2014-03-01
Doppler Imaging produces 2D global maps. When applied to cool planets or more massive brown dwarfs, it can map atmospheric features and track global weather patterns. The first substellar map, of the 2pc-distant brown dwarf Luhman 16B (Crossfield et al. 2014), revealed patchy regions of thin & thick clouds. Here, I investigate the feasibility of future Doppler Imaging of additional objects. Searching the literature, I find that all 3 of P, v sin i, and variability are published for 22 brown dwarfs. At least one datum exists for 333 targets. The sample is very incomplete below ~L5; we need more surveys to find the best targets for Doppler Imaging! I estimate limiting magnitudes for Doppler Imaging with various high-resolution near-infrared spectrographs. Only a handful of objects - at the M/L and L/T transitions - can be mapped with current tools. Large telescopes such as TMT and GMT will allow Doppler Imaging of many dozens of brown dwarfs and the brightest exoplanets. More targets beyond type L5 likely remain to be found. Future observations will let us probe the global atmospheric dynamics of many diverse objects.
NASA Astrophysics Data System (ADS)
Protopopescu, V.; D'Helon, C.; Barhen, J.
2003-06-01
A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.
Finding Your Voice: Talent Development Centers and the Academic Talent Search
ERIC Educational Resources Information Center
Rushneck, Amy S.
2012-01-01
Talent Development Centers are just one of many tools every family, teacher, and gifted advocate should have in their tool box. To understand the importance of Talent Development Centers, it is essential to also understand the Academic Talent Search Program. Talent Search participants who obtain scores comparable to college-bound high school…
Personalised Search Tool for Teachers--PoSTech!
ERIC Educational Resources Information Center
Seyedarabi, Faezeh; Peterson, Don; Keenoy, Kevin
2005-01-01
One of the ways in which teachers tend to "personalise" to the needs of their students is by complementing their teaching materials with online resources. However, the current online resources are designed in such a way that only allows teachers to customise their search and not personalise. Therefore, a Personalised Search Tool for…
The EMBL-EBI bioinformatics web and programmatic tools framework.
Li, Weizhong; Cowley, Andrew; Uludag, Mahmut; Gur, Tamer; McWilliam, Hamish; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Lopez, Rodrigo
2015-07-01
Since 2009 the EMBL-EBI Job Dispatcher framework has provided free access to a range of mainstream sequence analysis applications. These include sequence similarity search services (https://www.ebi.ac.uk/Tools/sss/) such as BLAST, FASTA and PSI-Search, multiple sequence alignment tools (https://www.ebi.ac.uk/Tools/msa/) such as Clustal Omega, MAFFT and T-Coffee, and other sequence analysis tools (https://www.ebi.ac.uk/Tools/pfa/) such as InterProScan. Through these services users can search mainstream sequence databases such as ENA, UniProt and Ensembl Genomes, utilising a uniform web interface or systematically through Web Services interfaces (https://www.ebi.ac.uk/Tools/webservices/) using common programming languages, and obtain enriched results with novel visualisations. Integration with EBI Search (https://www.ebi.ac.uk/ebisearch/) and the dbfetch retrieval service (https://www.ebi.ac.uk/Tools/dbfetch/) further expands the usefulness of the framework. New tools and updates such as NCBI BLAST+, InterProScan 5 and PfamScan, new categories such as RNA analysis tools (https://www.ebi.ac.uk/Tools/rna/), new databases such as ENA non-coding, WormBase ParaSite, Pfam and Rfam, and new workflow methods, together with the retirement of deprecated services, ensure that the framework remains relevant to today's biological community.
Subjective global assessment of nutritional status – A systematic review of the literature.
da Silva Fink, Jaqueline; Daniel de Mello, Paula; Daniel de Mello, Elza
2015-10-01
Subjective Global Assessment (SGA) is a nutritional assessment tool widely used in hospital clinical practice, even though it is not exempt from limitations. This systematic review intended to update knowledge on the performance of SGA as a method for the assessment of the nutritional status of hospitalized adults. The PubMed database was consulted, using the search term "subjective global assessment". Studies published in English, Portuguese or Spanish between 2002 and 2012 were selected, excluding those not available in full, letters to the editor, pilot studies, narrative reviews, studies with n < 30, studies with populations younger than 18 years of age, research with non-hospitalized populations, and those which used a modified version of the SGA. Of the 454 studies retrieved, 110 met the eligibility criteria. After applying the exclusion criteria, 21 studies were selected: 6 with surgical patients, 7 with clinical patients, and 8 with both. Most studies demonstrated SGA performance similar to or better than the usual assessment methods for nutritional status, such as anthropometry and laboratory data, but the same result was not found when comparing SGA and nutritional screening methods. Recently published literature demonstrates SGA as a valid tool for the nutritional diagnosis of hospitalized clinical and surgical patients, and points to a potential superiority of nutritional screening methods in the early detection of malnutrition.
Developing and using a rubric for evaluating evidence-based medicine point-of-care tools.
Shurtz, Suzanne; Foster, Margaret J
2011-07-01
The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and grading system employed.
Fan, Mingyi; Hu, Jiwei; Cao, Rensheng; Ruan, Wenqian; Wei, Xionghui
2018-06-01
Water pollution occurs mainly due to inorganic and organic pollutants, such as nutrients, heavy metals and persistent organic pollutants. For the modeling and optimization of pollutant removal, artificial intelligence (AI) has been used as a major tool in experimental design, since it can generate the optimal operational variables and has recently made tremendous advances. The present review describes the fundamentals, advantages and limitations of AI tools. Artificial neural networks (ANNs) are the AI tools most frequently adopted to predict pollutant removal processes because of their capabilities of self-learning and self-adapting, while genetic algorithm (GA) and particle swarm optimization (PSO) are also useful AI methodologies for efficient search of the global optima. This article summarizes the modeling and optimization of pollutant removal processes in water treatment using multilayer perceptron, fuzzy neural, radial basis function and self-organizing map networks. Furthermore, the results conclude that hybrid models of ANNs with GA and PSO can be successfully applied in water treatment with satisfactory accuracy. Finally, the limitations of current AI tools and their new developments are also highlighted for prospective applications in environmental protection.
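As an illustration of the PSO technique the review discusses, the following is a minimal sketch minimizing a simple test function. The parameter values (inertia, cognitive and social weights, swarm size) are common textbook choices, not those used in the reviewed studies:

```python
# Minimal particle swarm optimization (PSO) sketch. Each particle keeps a
# velocity pulled toward its personal best and the swarm's global best;
# parameter values here are illustrative assumptions.
import random

def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide (global) best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Simple convex test function (not a water-treatment model).
sphere = lambda x: sum(v * v for v in x)
```

In the reviewed applications the objective `f` would be a trained ANN predicting removal efficiency rather than this toy function.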
Adversarial search by evolutionary computation.
Hong, T P; Huang, K Y; Lin, W Y
2001-01-01
In this paper, we consider the problem of finding good next moves in two-player games. Traditional search algorithms, such as minimax and alpha-beta pruning, suffer great temporal and spatial expansion when exploring deeply into search trees to find better next moves. The evolution of genetic algorithms with the ability to find global or near global optima in limited time seems promising, but they are inept at finding compound optima, such as the minimax in a game-search tree. We thus propose a new genetic algorithm-based approach that can find a good next move by reserving the board evaluation values of new offspring in a partial game-search tree. Experiments show that solution accuracy and search speed are greatly improved by our algorithm.
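For reference, the traditional alpha-beta pruning baseline the abstract mentions can be sketched on an explicit game tree (a toy illustration of the standard algorithm, not the authors' GA-based method):

```python
# Classic minimax with alpha-beta pruning over an explicit game tree.
# Leaves are numeric evaluations; internal nodes are lists of children.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):      # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # beta cutoff: opponent avoids this line
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                       # alpha cutoff
    return value
```

On the two-ply tree `[[3, 5], [6, 9], [1, 2]]` with the maximizer to move, the root value is 6: the opponent holds each subtree to its minimum (3, 6, 1) and the maximizer picks the best of those.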
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search. The exhaustive search can be organized in such a way that the entire design space need not be searched, which somewhat reduces the computational burden. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, although more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; and since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
GeoViQua: quality-aware geospatial data discovery and evaluation
NASA Astrophysics Data System (ADS)
Bigagli, L.; Papeschi, F.; Mazzetti, P.; Nativi, S.
2012-04-01
GeoViQua (QUAlity aware VIsualization for the Global Earth Observation System of Systems) is a recently started FP7 project aiming at complementing the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI) providing the user community with innovative quality-aware search and evaluation tools, which will be integrated in the GEO-Portal, as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators, also contributing to the definition of a quality label (GEOLabel). GeoViQua proposed solutions will be assessed in several pilot case studies covering the whole Earth Observation chain, from remote sensing acquisition to data processing, to applications in the main GEOSS Societal Benefit Areas. This work presents the preliminary results of GeoViQua Work Package 4 "Enhanced geo-search tools" (WP4), started in January 2012. Its major anticipated technical innovations are search and evaluation tools that communicate and exploit data quality information from the GCI. In particular, GeoViQua will investigate a graphical search interface featuring a coherent and meaningful aggregation of statistics and metadata summaries (e.g. in the form of tables, charts), thus enabling end users to leverage quality constraints for data discovery and evaluation. Preparatory work on WP4 requirements indicated that users need the "best" data for their purpose, implying a high degree of subjectivity in judgment. 
This suggests that the GeoViQua system should exploit a combination of provider-generated metadata (objective indicators such as summary statistics), system-generated metadata (contextual/tracking information such as provenance of data and metadata), and user-generated metadata (informal user comments, usage information, rating, etc.). Moreover, metadata should include sufficiently complete access information to allow rich data visualization and propagation. The following main enabling components are currently identified within WP4: - Quality-aware access services, e.g. a quality-aware extension of the OGC Sensor Observation Service (SOS-Q) specification, to support quality constraints for sensor data publishing and access; - Quality-aware discovery services, namely a quality-aware extension of the OGC Catalog Service for the Web (CSW-Q), to cope with quality-constrained search; - Quality-augmentation broker (GeoViQua Broker), to support the linking and combination of the existing GCI metadata with GeoViQua- and user-generated metadata required to support the users in selecting the "best" data for their intended use. We are currently developing prototypes of the above quality-enabled geo-search components, which will be assessed in a sensor-based pilot case study in the next months. In particular, the GeoViQua Broker will be integrated with the EuroGEOSS Broker to implement CSW-Q and federate (either via distribution or harvesting schemes) quality-aware data sources. GeoViQua will constitute a valuable test-bed for advancing the current best practices and standards in geospatial quality representation and exploitation. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 265178.
Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.
Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang
2016-11-01
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Combining local and global limitations of visual search.
Põder, Endel
2017-04-01
There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments and the density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configurations of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results differed depending on the features involved. While color search exhibited virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results for the respective features. This study shows that visual search is limited by both local interference and global capacity, and that the limitations differ across visual features.
Finding Atmospheric Composition (AC) Metadata
NASA Technical Reports Server (NTRS)
Strub, Richard F.; Falke, Stefan; Fialkowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg
2015-01-01
The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions:
1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs.
2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services.
3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR.
4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once.
5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed).
6. Most (if not all) Earth science atmospheric composition data providers store a reference to their data at GCMD.
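The machine-to-machine harvesting the poster describes can be sketched in a few lines of Python. The Atom feed below is a fabricated sample (the dataset titles and IDs are invented); a real harvester would fetch such feeds from a catalog's OpenSearch endpoint rather than a string.

```python
import xml.etree.ElementTree as ET

# Fabricated OpenSearch/Atom response for illustration only.
SAMPLE = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
  <os:totalResults>2</os:totalResults>
  <entry><title>MOPITT CO Level 3</title><id>doc-1</id></entry>
  <entry><title>OMI NO2 Level 2</title><id>doc-2</id></entry>
</feed>"""

ATOM = "{http://www.w3.org/2005/Atom}"
OS = "{http://a9.com/-/spec/opensearch/1.1/}"

def harvest(feed_xml):
    """Extract the result count and (title, id) pairs from an Atom feed."""
    root = ET.fromstring(feed_xml)
    total = int(root.findtext(OS + "totalResults"))
    entries = [(e.findtext(ATOM + "title"), e.findtext(ATOM + "id"))
               for e in root.findall(ATOM + "entry")]
    return total, entries

total, entries = harvest(SAMPLE)
print(total, entries)
```

Harvesting IDs first and fetching full records later (conclusion 4 above) amounts to running this over paged feeds and deferring any per-`id` metadata requests.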
Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises
NASA Astrophysics Data System (ADS)
Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.
2015-12-01
The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions:
1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs.
2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services.
3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR.
4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once.
5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed).
6. Most (if not all) Earth science atmospheric composition data providers store a reference to their data at GCMD.
BingEO: Enable Distributed Earth Observation Data for Environmental Research
NASA Astrophysics Data System (ADS)
Wu, H.; Yang, C.; Xu, Y.
2010-12-01
Our planet is facing great environmental challenges, including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean, cheap energy. To address these problems, scientists are developing various models to analyze, forecast, and simulate geospatial phenomena in support of critical decision making. These models not only challenge our computing technology, but also challenge us to meet their huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS), and Web Coverage Service (WCS). Seamless sharing of and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences, including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated and visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers and educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module at the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) use Bing Search to discover Web Map Service (WMS) resources available over the internet; 2) develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) allow users to manually register data services; and 4) provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online. Given the amount of observation data already accumulated and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently, and economically in earth science applications.
Thiele, H.; Glandorf, J.; Koerting, G.; Reidegeld, K.; Blüggel, M.; Meyer, H.; Stephan, C.
2007-01-01
In today’s proteomics research, various techniques, instruments, and bioinformatics tools are necessary to manage the large amount of heterogeneous data, with automatic quality control to produce reliable and comparable results. Therefore a data-processing pipeline is mandatory for data validation and comparison in a data-warehousing system. The proteome bioinformatics platform ProteinScape has been proven to cover these needs. The reprocessing of HUPO BPP participants’ MS data was done within ProteinScape. The reprocessed information was transferred into the global data repository PRIDE. ProteinScape as a data-warehousing system covers two main aspects: archiving relevant data of the proteomics workflow, and information extraction functionality (protein identification, quantification, and generation of biological knowledge). As a strategy for automatic data validation, different protein search engines are integrated. Result analysis is performed using a decoy database search strategy, which allows the measurement of the false-positive identification rate. Peptide identifications across different workflows, different MS techniques, and different search engines are merged to obtain a quality-controlled protein list. The proteomics identifications database (PRIDE), as a public data repository, is an archiving system where data are finally stored and no longer changed by further processing steps. Data submission to PRIDE is open to proteomics laboratories generating protein and peptide identifications. An export tool has been developed for transferring all relevant HUPO BPP data from ProteinScape into PRIDE using the PRIDE.xml format. The EU-funded ProDac project will coordinate the development of software tools covering international standards for the representation of proteomics data.
The implementation of data submission pipelines and systematic data collection in public standards–compliant repositories will cover all aspects, from the generation of MS data in each laboratory to the conversion of all the annotating information and identifications to a standardized format. Such datasets can be used in the course of publishing in scientific journals.
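The decoy-database strategy mentioned above reduces to a small worked example. The scores below are made up; the point is only the accounting: decoy hits above a score threshold estimate the number of false positives among target hits, giving an estimated false-discovery rate.

```python
# Hypothetical peptide-spectrum matches: (score, is_decoy). Decoy hits
# above a threshold estimate the false positives among target hits.
hits = [(91, False), (88, False), (85, True), (82, False), (80, False),
        (77, True), (74, False), (70, True), (65, False), (60, True)]

def fdr_at(threshold, hits):
    """Estimated false-discovery rate for hits scoring >= threshold."""
    targets = sum(1 for s, d in hits if s >= threshold and not d)
    decoys = sum(1 for s, d in hits if s >= threshold and d)
    return decoys / targets if targets else 1.0

def threshold_for_fdr(max_fdr, hits):
    """Lowest score threshold whose estimated FDR stays within max_fdr."""
    for t in sorted({s for s, _ in hits}):
        if fdr_at(t, hits) <= max_fdr:
            return t
    return None

print(fdr_at(75, hits), threshold_for_fdr(0.25, hits))   # 0.5 80
```

At a threshold of 75 there are 4 target and 2 decoy hits, hence an estimated FDR of 0.5; raising the threshold to 80 brings the estimate within 25%.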
Combining the Bourne-Shell, sed and awk in the UNIX Environment for Language Analysis.
ERIC Educational Resources Information Center
Schmitt, Lothar M.; Christianson, Kiel T.
This document describes how to construct tools for language analysis in research and teaching using the Bourne-shell, sed, and awk, three search tools, in the UNIX operating system. Applications include: searches for words, phrases, grammatical patterns, and phonemic patterns in text; statistical analysis of text in regard to such searches,…
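A rough modern analogue of the shell/sed/awk searches described above, sketched in Python: awk-style word-frequency counting and a grep-style pattern search over a tiny sample text. The text and pattern are illustrative, not from the document.

```python
import re
from collections import Counter

TEXT = """The quick brown fox jumps over the lazy dog.
The dog, being lazy, did not jump back."""

# Word-frequency table (awk-style associative counting)
words = re.findall(r"[a-z']+", TEXT.lower())
freq = Counter(words)

# Pattern search (grep-style): the word "lazy" followed by another word
pattern = re.compile(r"\blazy\s+\w+", re.IGNORECASE)
matches = pattern.findall(TEXT)

print(freq.most_common(2), matches)
```

The same pipeline shape (tokenize, count, pattern-match) covers the word, phrase, and grammatical-pattern searches the abstract lists.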
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. 
Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
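The polynomial response-surface idea in the abstract above can be shown in miniature: fit a quadratic surrogate to a few "experiments" and optimize the surrogate instead of the expensive simulation. The one-variable response below is invented purely for illustration.

```python
def fit_quadratic(p0, p1, p2):
    """Exact quadratic y = a + b*x + c*x^2 through three design points
    (divided differences; a real study would least-squares fit many points)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    f01 = (y1 - y0) / (x1 - x0)
    f02 = (y2 - y0) / (x2 - x0)
    c = (f02 - f01) / (x2 - x1)
    b = f01 - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

# Three "experiments" on an unknown response (here secretly (x-3)^2 + 2)
a, b, c = fit_quadratic((1, 6), (2, 3), (5, 6))
x_opt = -b / (2 * c)                     # stationary point of the surrogate
y_opt = a + b * x_opt + c * x_opt ** 2
print(x_opt, y_opt)                      # 3.0 2.0
```

Design-of-experiment techniques decide where to place the sample points so the surrogate is trustworthy with as few expensive evaluations as possible.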
FLASH_SSF_Aqua-FM3-MODIS_Version3C
Atmospheric Science Data Center
2018-04-04
FLASH_SSF_Terra-FM1-MODIS_Version3C
Atmospheric Science Data Center
2018-04-04
Global reaction to the recent outbreaks of Zika virus: Insights from a Big Data analysis.
Bragazzi, Nicola Luigi; Alicino, Cristiano; Trucchi, Cecilia; Paganino, Chiara; Barberis, Ilaria; Martini, Mariano; Sticchi, Laura; Trinka, Eugen; Brigo, Francesco; Ansaldi, Filippo; Icardi, Giancarlo; Orsi, Andrea
2017-01-01
The recent spread of Zika virus represents an emerging global health threat. As such, it is attracting public interest worldwide, generating a great amount of related Internet searches and social media interactions. The aim of this research was to understand Zika-related digital behavior throughout the epidemic and to assess its consistency with real-world epidemiological data, using a behavioral informatics and analytics approach. In this study, the global web interest in and reaction to the recent outbreaks of the Zika virus were analyzed in terms of tweets and Google Trends (GT), Google News, YouTube, and Wikipedia search queries. These data streams were mined from 1st January 2004 to 31st October 2016, with a focus on the period November 2015-October 2016. This analysis was complemented with the use of epidemiological data. Spearman's correlation was performed to correlate all Zika-related data. Moreover, a multivariate regression was performed using Zika-related search queries as a dependent variable, and epidemiological data, number of inhabitants in 2015, and Human Development Index as predictor variables. Overall, 3,864,395 tweets and 284,903 accesses to Wikipedia pages dedicated to the Zika virus were analyzed during the study period. All web-data sources showed that the main spike of searches and interactions occurred in February 2016, with a second peak in August 2016. All novel data stream activities increased markedly during the epidemic period relative to the pre-epidemic period, when no web activity was detected. Correlations between data from all these web platforms were very high and statistically significant. The countries in which web searches were particularly concentrated were mainly in Central and South America. The majority of queries concerned the symptoms of the Zika virus, its vector of transmission, and its possible effects on babies, including microcephaly.
No statistically significant correlation was found between novel data streams and global real-world epidemiological data. At the country level, a correlation between digital interest in the Zika virus and the Zika incidence rate or microcephaly cases was detected. An increasing public interest and reaction to the current Zika virus outbreak was documented by all web-data sources, and a similar pattern of web reactions was detected. Public opinion seems to be particularly worried by the alert on the teratogenicity of the Zika virus. Stakeholders and health authorities could usefully exploit these internet tools to collect the concerns of public opinion and reply to them, disseminating key information.
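The rank-correlation step in the study above is straightforward to reproduce. Below is a small self-contained Spearman's rho (rank transform with tie handling, then Pearson on the ranks); the weekly query and case counts are invented toy numbers, not the study's data.

```python
def rank(xs):
    """1-based ranks, with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Toy weekly series: search-query volume vs reported case counts
queries = [12, 45, 300, 250, 90, 30]
cases = [1, 5, 60, 40, 12, 3]
print(round(spearman(queries, cases), 3))
```

Because the two toy series rise and fall in the same order, their ranks match exactly and rho is 1.0; real web and surveillance series would be noisier.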
Global reaction to the recent outbreaks of Zika virus: Insights from a Big Data analysis
Trucchi, Cecilia; Paganino, Chiara; Barberis, Ilaria; Martini, Mariano; Sticchi, Laura; Trinka, Eugen; Brigo, Francesco; Ansaldi, Filippo; Icardi, Giancarlo; Orsi, Andrea
2017-01-01
Objective: The recent spread of Zika virus represents an emerging global health threat. As such, it is attracting public interest worldwide, generating a great amount of related Internet searches and social media interactions. The aim of this research was to understand Zika-related digital behavior throughout the epidemic and to assess its consistency with real-world epidemiological data, using a behavioral informatics and analytics approach. Methods: In this study, the global web interest in and reaction to the recent outbreaks of the Zika virus were analyzed in terms of tweets and Google Trends (GT), Google News, YouTube, and Wikipedia search queries. These data streams were mined from 1st January 2004 to 31st October 2016, with a focus on the period November 2015—October 2016. This analysis was complemented with the use of epidemiological data. Spearman’s correlation was performed to correlate all Zika-related data. Moreover, a multivariate regression was performed using Zika-related search queries as a dependent variable, and epidemiological data, number of inhabitants in 2015, and Human Development Index as predictor variables. Results: Overall, 3,864,395 tweets and 284,903 accesses to Wikipedia pages dedicated to the Zika virus were analyzed during the study period. All web-data sources showed that the main spike of searches and interactions occurred in February 2016, with a second peak in August 2016. All novel data stream activities increased markedly during the epidemic period relative to the pre-epidemic period, when no web activity was detected. Correlations between data from all these web platforms were very high and statistically significant. The countries in which web searches were particularly concentrated were mainly in Central and South America. The majority of queries concerned the symptoms of the Zika virus, its vector of transmission, and its possible effects on babies, including microcephaly.
No statistically significant correlation was found between novel data streams and global real-world epidemiological data. At the country level, a correlation between digital interest in the Zika virus and the Zika incidence rate or microcephaly cases was detected. Conclusions: An increasing public interest and reaction to the current Zika virus outbreak was documented by all web-data sources, and a similar pattern of web reactions was detected. Public opinion seems to be particularly worried by the alert on the teratogenicity of the Zika virus. Stakeholders and health authorities could usefully exploit these internet tools to collect the concerns of public opinion and reply to them, disseminating key information. PMID:28934352
Access to Land Data Products Through the Land Processes DAAC
NASA Astrophysics Data System (ADS)
Klaassen, A. L.; Gacke, C. K.
2004-12-01
The Land Processes Distributed Active Archive Center (LP DAAC) was established as part of NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) initiative to process, archive, and distribute land-related data collected by EOS sensors, thereby promoting the inter-disciplinary study and understanding of the integrated Earth system. The LP DAAC is responsible for archiving, product development, distribution, and user support of Moderate Resolution Imaging Spectroradiometer (MODIS) land products derived from data acquired by the Terra and Aqua satellites and processing and distribution of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data products. These data are applied in scientific research, management of natural resources, emergency response to natural disaster, and Earth Science Education. There are several web interfaces by which the inventory may be searched and the products ordered. The LP DAAC web site (http://lpdaac.usgs.gov/) provides product-specific information and links to data access tools. The primary search and order tool is the EOS Data Gateway (EDG) (http://edcimswww.cr.usgs.gov/pub/imswelcome/) that allows users to search data holdings, retrieve descriptions of data sets, view browse images, and place orders. The EDG is the only tool to search the entire inventory of ASTER and MODIS products available from the LP DAAC. The Data Pool (http://lpdaac.usgs.gov/datapool/datapool.asp) is an online archive that provides immediate FTP access to selected LP DAAC data products. The data can be downloaded by going directly to the FTP site, where you can navigate to the desired granule, metadata file or browse image. It includes the ability to convert files from the standard HDF-EOS data format into GeoTIFF, to change the data projections, or perform spatial subsetting by using the HDF-EOS to GeoTIFF Converter (HEG) for selected data types. 
The Browse Tool, also known as the USGS Global Visualization Viewer (http://lpdaac.usgs.gov/aster/glovis.asp), provides an easy online method to search, browse, and order the LP DAAC ASTER and MODIS land data by viewing browse images to define spatial and temporal queries. The LP DAAC User Services Office is the interface for support for the ASTER and MODIS data products and services. The user services representatives are available to answer questions, assist with ordering data, provide technical support and referrals, and provide information on a variety of tools available to assist in data preparation. The LP DAAC User Services contact information is: LP DAAC User Services, U.S. Geological Survey EROS Data Center, 47914 252nd Street, Sioux Falls, SD 57198-0001; Voice: (605) 594-6116; Toll Free: 866-573-3222; Fax: 605-594-6963; E-mail: edc@eos.nasa.gov. "This abstract was prepared under Contract number 03CRCN0001 between SAIC and U.S. Geological Survey. Abstract has not been reviewed for conformity with USGS editorial standards and has been submitted for approval by the USGS Director."
Larson, Heidi J; Wilson, Rose; Hanley, Sharon; Parys, Astrid; Paterson, Pauline
2014-01-01
In June 2013, the Japanese Ministry of Health, Labor, and Welfare (MHLW) suspended its HPV vaccination recommendation after a series of highly publicized alleged adverse events following immunization stoked public doubts about the vaccine's safety. This paper examines the global spread of the news of Japan's HPV vaccine suspension through online media, and takes a retrospective look at non-Japanese media sources that were used to support those claiming HPV vaccine injury in Japan. Two searches were conducted. One searched relevant content in an archive of Google Alerts on vaccines and vaccine-preventable diseases. The second search was conducted using Google Search on January 6th, 2014 and on July 18th, 2014, using the keywords "HPV vaccine Japan" and "cervical cancer vaccine Japan." Both searches were used because Google Searches render more (and some different) results than Google Alerts. The online media items collected and analyzed totalled 57. Sixty-three percent were published in the USA, 23% in Japan, 5% in the UK, 2% in France, 2% in Switzerland, 2% in the Philippines, 2% in Kenya, and 2% in Denmark. The majority took a negative view of the HPV vaccine, the primary concern being vaccine safety. The news of Japan's suspension of the HPV vaccine recommendation has traveled globally through online media and social media networks, being applauded by anti-vaccination groups but not by the global scientific community. The longer the uncertainty around the Japanese HPV vaccine recommendation persists, the further the public concerns are likely to travel.
Larson, Heidi J; Wilson, Rose; Hanley, Sharon; Parys, Astrid; Paterson, Pauline
2014-01-01
In June 2013, the Japanese Ministry of Health, Labor, and Welfare (MHLW) suspended its HPV vaccination recommendation after a series of highly publicized alleged adverse events following immunization stoked public doubts about the vaccine's safety. This paper examines the global spread of the news of Japan's HPV vaccine suspension through online media, and takes a retrospective look at non-Japanese media sources that were used to support those claiming HPV vaccine injury in Japan. Methods: Two searches were conducted. One searched relevant content in an archive of Google Alerts on vaccines and vaccine-preventable diseases. The second search was conducted using Google Search on January 6th, 2014 and on July 18th, 2014, using the keywords “HPV vaccine Japan” and “cervical cancer vaccine Japan.” Both searches were used because Google Searches render more (and some different) results than Google Alerts. Results: The online media items collected and analyzed totalled 57. Sixty-three percent were published in the USA, 23% in Japan, 5% in the UK, 2% in France, 2% in Switzerland, 2% in the Philippines, 2% in Kenya, and 2% in Denmark. The majority took a negative view of the HPV vaccine, the primary concern being vaccine safety. Discussion: The news of Japan's suspension of the HPV vaccine recommendation has traveled globally through online media and social media networks, being applauded by anti-vaccination groups but not by the global scientific community. The longer the uncertainty around the Japanese HPV vaccine recommendation persists, the further the public concerns are likely to travel. PMID:25483472
Aiding Design of Wave Energy Converters via Computational Simulations
NASA Astrophysics Data System (ADS)
Jebeli Aqdam, Hejar; Ahmadi, Babak; Raessi, Mehdi; Tootkaboni, Mazdak
2015-11-01
With the increasing interest in renewable energy sources, wave energy converters will continue to gain attention as a viable alternative to current electricity production methods. It is therefore crucial to develop computational tools for the design and analysis of wave energy converters. A successful design requires balance between the design performance and cost. Here an analytical solution is used for the approximate analysis of interactions between a flap-type wave energy converter (WEC) and waves. The method is verified using other flow solvers and experimental test cases. Then the model is used in conjunction with a powerful heuristic optimization engine, Charged System Search (CSS), to explore the WEC design space. CSS is inspired by the behavior of charged particles. It searches the design space by considering candidate answers as charged particles and moving them according to Coulomb's law of electrostatics and Newton's laws of motion to find the global optimum. Finally, the impacts of changes in different design parameters on the power take-off of the superior WEC designs are investigated. National Science Foundation, CBET-1236462.
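A bare-bones sketch of a Charged System Search-style optimizer, assuming nothing beyond the abstract's description: fitter candidates carry more charge, and Coulomb-like attractions plus a Newtonian velocity update move the population. The test function (a simple sphere) and all constants are illustrative, not the WEC model.

```python
import random

random.seed(7)

def sphere(x):                       # toy objective to minimize
    return sum(v * v for v in x)

def css(dim=2, n=12, iters=150, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    gbest = min(pos, key=sphere)[:]  # elitism: remember the best ever seen
    for _ in range(iters):
        fit = [sphere(p) for p in pos]
        fbest, fworst = min(fit), max(fit)
        # Charge: better (lower-fitness) particles carry more charge
        q = [(fworst - f) / (fworst - fbest + 1e-12) + 1e-3 for f in fit]
        for i in range(n):
            force = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                d = [pos[j][k] - pos[i][k] for k in range(dim)]
                r = max(sum(v * v for v in d) ** 0.5, 1e-9)
                for k in range(dim):   # Coulomb-like attraction toward j
                    force[k] += q[j] * d[k] / r ** 2
            for k in range(dim):       # Newtonian update with damping
                vel[i][k] = 0.7 * vel[i][k] + 0.2 * random.random() * force[k]
                pos[i][k] = min(hi, max(lo, pos[i][k] + vel[i][k]))
        cand = min(pos, key=sphere)
        if sphere(cand) < sphere(gbest):
            gbest = cand[:]
    return gbest

best = css()
print([round(v, 2) for v in best], round(sphere(best), 4))
```

In a WEC setting the decision vector would hold design parameters (e.g. flap geometry) and the objective would be the analytical power model; the population mechanics stay the same.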
NASA Astrophysics Data System (ADS)
Wong, M. M.; Brennan, J.; Bagwell, R.; Behnke, J.
2015-12-01
This poster will introduce and explore the various social media efforts, monthly webinar series and a redesigned website (https://earthdata.nasa.gov) established by National Aeronautics and Space Administration's (NASA) Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It comprises twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access clients (Reverb and Earthdata Search), a dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative and a host of other discipline-specific data discovery, data access, data subsetting and visualization tools. We have embarked on these efforts to reach out to new audiences and potential new users and to engage our diverse end user communities world-wide. One of the key objectives is to increase awareness of the breadth of Earth science data information, services, and tools that are publicly available while also highlighting how these data and technologies enable scientific research.
Spronk, Inge; Burgers, Jako S; Schellevis, François G; van Vliet, Liesbeth M; Korevaar, Joke C
2018-05-11
Shared decision-making (SDM) in the management of metastatic breast cancer care is associated with positive patient outcomes. In daily clinical practice, however, SDM is not fully integrated yet. Initiatives to improve the implementation of SDM would be helpful. The aim of this review was to assess the availability and effectiveness of tools supporting SDM in metastatic breast cancer care. Literature databases were systematically searched for articles published since 2006 focusing on the development or evaluation of tools to improve information-provision and to support decision-making in metastatic breast cancer care. Internet searches and experts identified additional tools. Data from included tools were extracted and the evaluation of tools was appraised using the GRADE grading system. The literature search yielded five instruments. In addition, two tools were identified via internet searches and consultation of experts. Four tools were specifically developed for supporting SDM in metastatic breast cancer, the other three tools focused on metastatic cancer in general. Tools were mainly applicable across the care process, and usable for decisions on supportive care with or without chemotherapy. All tools were designed for patients to be used before a consultation with the physician. Effects on patient outcomes were generally weakly positive although most tools were not studied in well-designed studies. Despite its recognized importance, only two tools were positively evaluated on effectiveness and are available to support patients with metastatic breast cancer in SDM. These tools show promising results in pilot studies and focus on different aspects of care. However, their effectiveness should be confirmed in well-designed studies before implementation in clinical practice. Innovation and development of SDM tools targeting clinicians as well as patients during a clinical encounter is recommended.
Developing and using a rubric for evaluating evidence-based medicine point-of-care tools
Foster, Margaret J
2011-01-01
Objective: The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. Methods: The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Results: Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. Conclusions: As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and grading system employed. PMID:21753917
Web Feet Guide to Search Engines: Finding It on the Net.
ERIC Educational Resources Information Center
Web Feet, 2001
2001-01-01
This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)
Searching in clutter : visual attention strategies of expert pilots
DOT National Transportation Integrated Search
2012-10-22
Clutter can slow visual search. However, experts may develop attention strategies that alleviate the effects of clutter on search performance. In the current study we examined the effects of global and local clutter on visual search performance and a...
Measuring situation awareness in emergency settings: a systematic review of tools and outcomes
Cooper, Simon; Porter, Joanne; Peach, Linda
2014-01-01
Background Nontechnical skills have an impact on health care outcomes and improve patient safety. Situation awareness is a core nontechnical skill, reflecting the view that an understanding of the environment will influence decision-making and performance. This paper reviews and describes indirect and direct measures of situation awareness applicable to emergency settings. Methods Electronic databases and search engines were searched from 1980 to 2010, including CINAHL, Ovid Medline, Pro-Quest, Cochrane, and the search engine Google Scholar. Access strategies included keyword, author, and journal searches. Publications identified were assessed for relevance, and analyzed and synthesized using Oxford evidence levels and the Critical Appraisal Skills Programme guidelines in order to assess their quality and rigor. Results One hundred and thirteen papers were initially identified, reduced to 55 following title and abstract review. The final selection included 14 papers drawn from the fields of emergency medicine, intensive care, anesthetics, and surgery. Ten of these discussed four general nontechnical skill measures (including situation awareness) and four incorporated the Situation Awareness Global Assessment Technique. Conclusion A range of direct and indirect techniques for measuring situation awareness is available. In the medical literature, indirect approaches are the most common, with situation awareness measured as part of a nontechnical skills assessment. In simulation-based studies, situation awareness in emergencies tends to be suboptimal, indicating the need for improved training techniques to enhance awareness and improve decision-making. PMID:27147872
NASA Technical Reports Server (NTRS)
Rice, J. Kevin
2013-01-01
The Extensible Markup Language (XML) Telemetric and Command Exchange (XTCE) GOVSAT Tool Suite is written in Java for manipulating XTCE XML files and contains three tools: validation, search, and reporting. XTCE is a Consultative Committee for Space Data Systems (CCSDS) and Object Management Group (OMG) specification for describing the format and information in telemetry and command packet streams. These descriptions are files that are used to configure real-time telemetry and command systems for mission operations. XTCE's purpose is to exchange database information between different systems. XTCE GOVSAT consists of rules for narrowing the use of XTCE for missions. The Validation Tool syntax-checks GOVSAT XML files. The Search Tool searches the GOVSAT XML files (e.g., for command and telemetry mnemonics) and displays the results. Finally, the Reporting Tool creates command and telemetry reports, which can be displayed or printed for use by the operations team.
Hyperspace geography: visualizing fitness landscapes beyond 4D.
Wiles, Janet; Tonkes, Bradley
2006-01-01
Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
Weil, Alexander G; Bojanowski, Michel W; Jamart, Jacques; Gustin, Thierry; Lévêque, Marc
2014-01-01
To evaluate the quality of information available on the Internet to patients with a cervical pathology undergoing elective cervical spine surgery. Six key words ("cervical discectomy," "cervical foraminotomy," "cervical fusion," "cervical disc replacement," "cervical arthroplasty," "cervical artificial disc") were entered into two different search engines (Google, Yahoo!). For each key word, the first 50 websites were evaluated for accessibility, comprehensibility, and website quality using the DISCERN tool, transparency and honesty criteria, and an accuracy and exhaustivity scale. Of 5,098,500 evaluable websites, 600 were visited; 97 (16%) of these websites were evaluated for quality and comprehensiveness. Overall, 3% of sites obtained an excellent global quality score, 7% obtained a good score, 25% obtained an above average score, 15% obtained an average score, 37% obtained a poor score, and 13% obtained a very poor score. High-quality websites were affiliated with a professional society (P = 0.021), had bibliographical references (P = 0.030), and had a recent update within 6 months (r = 0.277, P < 0.001). No correlation between global quality score and other variables was observed. This study shows that the search for medical information on the Internet is time-consuming and often disappointing. The Internet is a potentially misleading source of information. Surgeons and professional societies must use the Internet as an ally in providing optimal information to patients. Copyright © 2014. Published by Elsevier Inc.
SGO: A fast engine for ab initio atomic structure global optimization by differential evolution
NASA Astrophysics Data System (ADS)
Chen, Zhanghui; Jia, Weile; Jiang, Xiangwei; Li, Shu-Shen; Wang, Lin-Wang
2017-10-01
As high-throughput calculations and materials genome approaches become more and more popular in materials science, the search for optimal ways to predict the atomic global-minimum structure is a high research priority. This paper presents a fast method for global search of atomic structures at the ab initio level. The structures global optimization (SGO) engine consists of a high-efficiency differential evolution algorithm, accelerated local relaxation methods, and a plane-wave density functional theory code running on GPU machines. The purpose is to show what can be achieved by combining superior algorithms at the different levels of the searching scheme. SGO can search the global-minimum configurations of crystals, two-dimensional materials, and quantum clusters without prior symmetry restriction in a relatively short time (half an hour to several hours for systems with fewer than 25 atoms), thus making such a task a routine calculation. Comparisons with other existing methods such as minima hopping and genetic algorithms are provided. One motivation of our study is to investigate the properties of magnetic systems in different phases. The SGO engine is capable of surveying the local minima surrounding the global minimum, which provides information on the overall energy landscape of a given system. Using this capability we have found several new configurations for the test systems, explored their energy landscape, and demonstrated that the magnetic moment of metal clusters fluctuates strongly among different local minima.
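The core of the SGO engine is a differential evolution (DE) global search over atomic configurations. As a rough illustration of the DE step that SGO builds on, the sketch below minimizes a toy objective standing in for an ab initio total energy; it is a generic DE/rand/1/bin minimizer, not the SGO code itself.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (illustrative, not the SGO engine)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct donors, none equal to the target index i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if j == j_rand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ct = f(trial)
            if ct <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, ct
    best = min(range(pop_size), key=lambda k: cost[k])
    return pop[best], cost[best]

# Toy objective standing in for a DFT total-energy evaluation
sphere = lambda x: sum(v * v for v in x)
x_best, e_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```

In SGO the expensive step is the objective itself (a plane-wave DFT energy), which is why the paper pairs the DE loop with accelerated local relaxations and GPU execution.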
Suzuki, Hirofumi; Kawabata, Takeshi; Nakamura, Haruki
2016-02-15
Omokage search is a service for searching the Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) by the global shape similarity of biological macromolecules and their assemblies. The server compares global shapes of assemblies independent of sequence order and number of subunits. As a search query, the user inputs a structure ID (PDB ID or EMDB ID) or uploads an atomic model or 3D density map to the server. The search usually completes within 1 min, using one-dimensional profiles (incremental distance rank profiles) to characterize the shapes. Using the gmfit (Gaussian mixture model fitting) program, the structures found are fitted onto the query structure, and the superimposed structures are displayed in the Web browser. Our service provides new structural perspectives to life science researchers. Omokage search is freely accessible at http://pdbj.org/omokage/. © The Author 2015. Published by Oxford University Press.
Use of Semantic Technology to Create Curated Data Albums
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin
2014-01-01
One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the data sets they need can obtain the specific files using these systems. However, in cases where researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discover tools in Earth science that can filter through large volumes of distributed online data and information and only aggregate the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. The tool supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. This tool automates most of the tedious information/data gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based, relevancy ranking algorithm that filters out nonrelevant information and data. The curation enables better search results as compared to the simple keyword searches provided by existing data systems in Earth science.
Förster, Jens
2009-02-01
Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on an allegedly unrelated generation task. Across materials, participants generated more similarities than dissimilarities after global priming, whereas for participants with local priming, the opposite was true. Experiments 5-6 demonstrated a bidirectional link whereby participants who were first instructed to search for similarities attended more to the gestalt of a stimulus than to its details, whereas the reverse was true for those who were initially instructed to search for dissimilarities. Because important psychological variables are correlated with processing styles, in Experiments 7-9, temporal distance, a promotion focus, and high power were predicted and shown to enhance the search for similarities, whereas temporal proximity, a prevention focus, and low power enhanced the search for dissimilarities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
ESIP Documentation Cluster Session: GCMD Keyword Update
NASA Technical Reports Server (NTRS)
Stevens, Tyler
2018-01-01
The Global Change Master Directory (GCMD) Keywords are a hierarchical set of controlled Earth Science vocabularies that help ensure Earth science data and services are described in a consistent and comprehensive manner and allow for the precise searching of collection-level metadata and subsequent retrieval of data and services. Initiated over twenty years ago, the GCMD Keywords are periodically analyzed for relevancy and will continue to be refined and expanded in response to user needs. This talk explores the current status of the GCMD keywords, the value and usage that the keywords bring to different tools/agencies as it relates to data discovery, and how the keywords relate to SWEET (Semantic Web for Earth and Environmental Terminology) Ontologies.
A portal for the ocean biogeographic information system
Zhang, Yunqing; Grassle, J. F.
2002-01-01
Since its inception in 1999 the Ocean Biogeographic Information System (OBIS) has developed into an international science program as well as a globally distributed network of biogeographic databases. An OBIS portal at Rutgers University provides the links and functional interoperability among member database systems. Protocols and standards have been established to support effective communication between the portal and these functional units. The portal provides distributed data searching, a taxonomy name service, a GIS with access to relevant environmental data, biological modeling, and education modules for mariners, students, environmental managers, and scientists. The portal will integrate Census of Marine Life field projects, national data archives, and other functional modules, and provides for network-wide analyses and modeling tools.
A strategy to find minimal energy nanocluster structures.
Rogan, José; Varas, Alejandro; Valdivia, Juan Alejandro; Kiwi, Miguel
2013-11-05
An unbiased strategy to search for the global and local minimal energy structures of free-standing nanoclusters is presented. Our objectives are twofold: to find a diverse set of low-lying local minima, as well as the global minimum. To do so, we make intensive use of the fast inertial relaxation engine (FIRE) algorithm as an efficient local minimizer. This procedure turns out to be quite efficient in reaching the global minimum, and also most of the local minima. We test the method with the Lennard-Jones (LJ) potential, for which an abundant literature exists, and obtain novel results, which include a new local minimum for LJ13, 10 new local minima for LJ14, and thousands of new local minima for 15 ≤ N ≤ 65. Insights on how to choose the initial configurations, analyzing the effectiveness of the method in reaching low-energy structures, including the global minimum, are developed as a function of the number of atoms of the cluster. Also, a novel characterization of the potential energy surface, analyzing properties of the local minima basins, is provided. The procedure constitutes a promising tool to generate a diverse set of cluster conformations, both two- and three-dimensional, that can be used as an input for refinement by means of ab initio methods. Copyright © 2013 Wiley Periodicals, Inc.
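The potential energy surface being searched can be made concrete with the Lennard-Jones potential used in the paper's benchmarks. A minimal sketch of the cluster energy in reduced units (illustrative only; the FIRE minimizer and the search strategy themselves are not reproduced here):

```python
import itertools

def lj_energy(coords, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a cluster, summed over all pairs
    (reduced units: eps = sigma = 1)."""
    e = 0.0
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(coords, 2):
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        sr6 = (sigma * sigma / r2) ** 3
        e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e

# The LJ dimer has its analytic minimum at r = 2**(1/6) * sigma, energy -eps
r_min = 2.0 ** (1.0 / 6.0)
dimer = [(0.0, 0.0, 0.0), (r_min, 0.0, 0.0)]
print(lj_energy(dimer))  # ≈ -1.0, the analytic dimer minimum
```

A global search such as the one described above repeatedly perturbs cluster coordinates and relaxes them under this energy, cataloguing the distinct minima it lands in.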
Global Futures: The Emerging Scenario.
ERIC Educational Resources Information Center
Seth, Satish C.
1983-01-01
Acknowledging global interdependence, especially in economics, may be the most important step toward resolving international conflicts. Describes seven major global dangers and gives scenarios for exploring likely global futures. As "tools of prescription" these global models are inadequate, but as "tools of analysis" they have…
RNA motif search with data-driven element ordering.
Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa
2016-05-18
In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented it in a new motif search tool, RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements, and the running time of the algorithm is highly dependent on their ordering. By approaching the element ordering problem in a principled way, we demonstrate more than 100-fold speedup of the search for complex motifs compared to previously published tools. We have developed a new method for RNA motif search that allows for a significant speedup of the search of complex motifs that include pseudoknots. Such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo.
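The element-ordering idea can be illustrated with a simple selectivity heuristic: scan the rarest (most constrained) motif elements first, so the backtracking search prunes candidate positions early. The sketch below is a toy version under a uniform-background assumption; RNArobo's actual ordering is data-driven and adapts during the search.

```python
def expected_hits(pattern, base_freq):
    """Expected matches per genome position for an exact DNA pattern,
    assuming independent bases with the given background frequencies."""
    p = 1.0
    for base in pattern:
        p *= base_freq.get(base, 0.25)
    return p

def order_elements(elements, base_freq):
    """Order motif elements so the most selective (fewest expected hits)
    are matched first, shrinking the backtracking tree near its root."""
    return sorted(elements, key=lambda e: expected_hits(e, base_freq))

uniform = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
order = order_elements(["GC", "TTTTTT", "ACGT"], uniform)
print(order)  # rarest pattern first: ['TTTTTT', 'ACGT', 'GC']
```

Real motif elements also carry secondary-structure constraints (helices, insertions), but the same principle applies: the cheapest large reduction in search time comes from choosing which element to anchor on first.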
PIPI: PTM-Invariant Peptide Identification Using Coding Method.
Yu, Fengchao; Li, Ning; Yu, Weichuan
2016-12-02
In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods named restricted tools (including Mascot, Comet, and MS-GF+) only allow a small number of PTM types in the database search process. Alternatively, the other group of methods named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa) avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and the PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top-scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI achieves comparable sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector.
These two tools simplify the task by only considering up to one modified amino acid in each peptide, which results in a higher sensitivity but has difficulty in dealing with multiple modified amino acids. The simulation experiments also show that PIPI has the lowest false discovery proportion, the highest PTM characterization accuracy, and the shortest running time among the unrestricted tools.
Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha
2013-01-01
This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.
An efficient and practical approach to obtain a better optimum solution for structural optimization
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu; Huang, Jyun-Hao
2013-08-01
For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
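The space-reduction idea can be sketched independently of any particular data-mining algorithm: sample the design space, keep the best designs, and shrink the bounds to their bounding box before handing the problem to a gradient-based method such as SQP. A minimal illustration (not the paper's classification/association/clustering procedure):

```python
import random

def reduce_search_space(f, bounds, n_samples=400, keep_frac=0.1, seed=1):
    """Sample the design space and shrink the bounds to the bounding box
    of the best-performing samples (illustrative space reduction)."""
    rng = random.Random(seed)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_samples)]
    pts.sort(key=f)  # best objective values first
    best = pts[: max(1, int(keep_frac * n_samples))]
    return [(min(p[j] for p in best), max(p[j] for p in best))
            for j in range(len(bounds))]

# Toy objective with its optimum at (2, -1), standing in for a structural model
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
small = reduce_search_space(f, [(-10.0, 10.0), (-10.0, 10.0)])
# 'small' is a much tighter box that still brackets the optimum;
# a gradient-based solver (e.g., SQP) would then be run inside it.
```

For constrained problems the same filter can be applied to feasibility (keep only samples satisfying the constraints) before shrinking the box, mirroring the paper's use of mining to locate the feasible region.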
Pineo, Helen; Glonti, Ketevan; Rutter, Harry; Zimmermann, Nicole; Wilkinson, Paul; Davies, Michael
2017-01-13
There is wide agreement that there is a lack of attention to health in municipal environmental policy-making, such as urban planning and regeneration. Explanations for this include differing professional norms between health and urban environment professionals, system complexity and limited evidence for causality between attributes of the built environment and health outcomes. Data from urban health indicator (UHI) tools are potentially a valuable form of evidence for local government policy and decision-makers. Although many UHI tools have been specifically developed to inform policy, there is poor understanding of how they are used. This study aims to identify the nature and characteristics of UHI tools and their use by municipal built environment policy and decision-makers. Health and social sciences databases (ASSIA, Campbell Library, EMBASE, MEDLINE, Scopus, Social Policy and Practice and Web of Science Core Collection) will be searched for studies using UHI tools alongside hand-searching of key journals and citation searches of included studies. Advanced searches of practitioner websites and Google will also be used to find grey literature. Search results will be screened for UHI tools, and for studies which report on or evaluate the use of such tools. Data about UHI tools will be extracted to compile a census and taxonomy of existing tools based on their specific characteristics and purpose. In addition, qualitative and quantitative studies about the use of these tools will be appraised using quality appraisal tools produced by the UK National Institute for Health and Care Excellence (NICE) and synthesised in order to gain insight into the perceptions, value and use of UHI tools in the municipal built environment policy and decision-making process. This review is not registered with PROSPERO. 
This systematic review focuses specifically on UHI tools that assess the physical environment's impact on health (such as transport, housing, air quality and greenspace). This study will help indicator producers understand whether this form of evidence is of value to built environment policy and decision-makers and how such tools should be tailored for this audience.
SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.
Wang, Chunlin; Lefkowitz, Elliot J
2004-10-28
Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. 
Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist.
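The QS-search approach can be sketched as a load-balanced partition of query sequences across cluster nodes, with per-node work approximated by total residue count. The greedy longest-first heuristic below is illustrative; SS-Wrapper's actual scheduling may differ.

```python
def segment_queries(queries, n_nodes):
    """Split query sequences across nodes, balancing total residue count
    per node (greedy longest-first bin packing), in the spirit of QS-search."""
    bins = [[] for _ in range(n_nodes)]
    load = [0] * n_nodes
    for q in sorted(queries, key=len, reverse=True):
        i = load.index(min(load))  # assign to the least-loaded node
        bins[i].append(q)
        load[i] += len(q)
    return bins

# Toy queries of varying length; each node would run BLAST/HMMPFAM on its bin
queries = ["A" * 900, "A" * 500, "A" * 400, "A" * 300, "A" * 200]
bins = segment_queries(queries, 2)
print([sum(map(len, b)) for b in bins])  # [1200, 1100]
```

Each bin would then be written out as a FASTA chunk and searched independently, with results merged afterwards; the complementary DS-BLAST wrapper instead splits the database when it exceeds a node's memory.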
Contextual cueing by global features
Kunar, Melina A.; Flusberg, Stephen J.; Wolfe, Jeremy M.
2008-01-01
In visual search tasks, attention can be guided to a target item, appearing amidst distractors, on the basis of simple features (e.g. find the red letter among green). Chun and Jiang’s (1998) “contextual cueing” effect shows that RTs are also speeded if the spatial configuration of items in a scene is repeated over time. In these studies we ask if global properties of the scene can speed search (e.g. if the display is mostly red, then the target is at location X). In Experiment 1a, the overall background color of the display predicted the target location. Here the predictive color could appear 0, 400 or 800 msec in advance of the search array. Mean RTs are faster in predictive than in non-predictive conditions. However, there is little improvement in search slopes. The global color cue did not improve search efficiency. Experiments 1b-1f replicate this effect using different predictive properties (e.g. background orientation/texture, stimulus color, etc.). The results show a strong RT effect of predictive background but (at best) only a weak improvement in search efficiency. A strong improvement in efficiency was found, however, when the informative background was presented 1500 msec prior to the onset of the search stimuli and when observers were given explicit instructions to use the cue (Experiment 2). PMID:17355043
NASA Astrophysics Data System (ADS)
Newman, D. J.; Mitchell, A. E.
2015-12-01
At AGU 2014, NASA EOSDIS demonstrated a case study of an OpenSearch framework for Earth science data discovery. That framework leverages the IDN and CWIC OpenSearch API implementations to provide seamless discovery of data through the 'two-step' discovery process as outlined by the Federation of Earth Science Information Partners (ESIP) OpenSearch Best Practices. But how would an Earth scientist leverage this framework, and what are the benefits? Using a client that understands the OpenSearch specification and, for further clarity, the various best practices and extensions, a scientist can discover a plethora of data not normally accessible either by traditional methods (NASA Earth Data Search, Reverb, etc.) or direct methods (going to the source of the data). We will demonstrate, via the CWICSmart web client, how an Earth scientist can access data on regional phenomena in a uniform and aggregated manner. We will demonstrate how an Earth scientist can 'globalize' their discovery. You want to find local data on 'sea surface temperature of the Indian Ocean'? We can help you with that. 'European meteorological data'? Yes. 'Brazilian rainforest satellite imagery'? That too. CWIC allows you to get Earth science data in a uniform fashion from a large number of disparate, worldwide agencies. This is what we mean by Global OpenSearch.
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. 
In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
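The hybrid global-local idea can be sketched compactly. The study's actual pipeline (particle swarm optimization, then Powell's method, then Gauss-Marquardt-Levenberg regression) is compressed here to two illustrative stages on hypothetical synthetic Freundlich data: a seeded global random search over the parameter bounds, followed by a derivative-free compass search that refines the best candidate.

```python
import random

def wssr(params, data):
    # Weighted sum of squared residuals for a Freundlich isotherm
    # q = Kf * C**n (unit weights, for simplicity)
    kf, n = params
    return sum((q - kf * c ** n) ** 2 for c, q in data)

def hybrid_fit(data, bounds, n_global=3000, seed=1):
    rng = random.Random(seed)
    # Stage 1: global random sampling inside the parameter bounds
    best = min(
        ([rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_global)),
        key=lambda p: wssr(p, data),
    )
    best_f = wssr(best, data)
    # Stage 2: derivative-free compass search; steps shrink only
    # when no axis move improves the fit
    steps = [(hi - lo) / 10.0 for lo, hi in bounds]
    for _ in range(200):
        improved = False
        for i in range(len(best)):
            for delta in (steps[i], -steps[i]):
                trial = list(best)
                trial[i] += delta
                f = wssr(trial, data)
                if f < best_f:
                    best, best_f = trial, f
                    improved = True
        if not improved:
            steps = [s * 0.5 for s in steps]
    return best, best_f

# Hypothetical noiseless data generated from Kf = 2.0, n = 0.7
data = [(c, 2.0 * c ** 0.7) for c in (0.5, 1, 2, 5, 10, 20)]
(kf, n), fit = hybrid_fit(data, [(0.01, 10.0), (0.1, 1.5)])
```

The global stage guards against the local-minimum traps the study reports for trial-and-error and plain non-linear regression fits; the local stage then tightens the estimate.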
Clark, Megan; Raffray, Marie; Hendricks, Kristin; Gagnon, Anita J
2016-05-01
Nurses are learning and practicing in an increasingly global world. Both nursing schools and nursing students are seeking guidance as they integrate global health into their learning and teaching. This systematic review is intended to identify the most common global and public health core competencies found in the literature and to better inform schools of nursing wishing to include global health content in their curricula. Systematic review. An online search of the CINAHL and Medline databases, together with pertinent gray literature, was conducted for articles published before 2013. Relevant literature on global health (GH) and public and community health (PH/CH) competencies was reviewed to determine recommended competencies of both types, using a combination of search terms. Studies must have addressed competencies as defined in the literature and must have been pertinent to GH or PH/CH. The databases were systematically searched and, after reading the full content of the included studies, key concepts were extracted and synthesized. Twenty-five studies were identified, resulting in a list of 14 global health core competencies. These competencies are applicable to a variety of health disciplines, but can particularly inform the efforts of nursing schools to integrate global health concepts into their curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to solve problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and with others such as the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems: finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.
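For orientation, here is a minimal sketch of the base gravitational search step that GGS augments with analytical gradients (this is the generic GSA scheme, not the published GGS algorithm): better agents are assigned heavier masses and therefore pull the rest of the population toward themselves, with the gravitational constant decaying over time.

```python
import math
import random

def gsa(f, dim, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search sketch: agents are masses; better
    fitness means a heavier mass and a stronger pull on the others."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    best, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(xi) for xi in x]
        if min(fit) < best_f:
            best_f = min(fit)
            best = list(x[fit.index(best_f)])
        worst, top = max(fit), min(fit)
        m = [(worst - fi) / (worst - top + 1e-12) for fi in fit]  # minimization
        total = sum(m) + 1e-12
        mass = [mi / total for mi in m]
        g = g0 * math.exp(-alpha * t / iters)  # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                r = math.dist(x[i], x[j]) + 1e-9
                for d in range(dim):
                    acc[d] += rng.random() * g * mass[j] * (x[j][d] - x[i][d]) / r
            for d in range(dim):
                v[i][d] = rng.random() * v[i][d] + acc[d]
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
    return best, best_f

# Example: minimize the sphere function in 2D
best, best_f = gsa(lambda p: sum(c * c for c in p), 2, (-5.0, 5.0))
```

GGS's contribution, per the abstract, is to replace the purely stochastic descent near a basin with an analytical-gradient minimization to the nearest local minimum, which is where the speedup over plain GSA comes from.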
Bibliometric analysis of nanotechnology applied in oncology from 2002 to 2011.
Dong, Xifeng; Qiu, Xiao-chun; Liu, Qian; Jia, Jack
2013-12-01
Innovation in the last decade has endowed nanotechnology with an assortment of tools for drug delivery, imaging, and sensing in cancer research. These rapidly emerging tools are indicative of a burgeoning field ready to expand into medical applications. The aim of this study is to analyze the applications of nanotechnology in oncology with bibliometric methods and to evaluate development in this field. A literature search was performed using the PubMed search engine with the MeSH terms (all): nanotechnology, nanomedicine, nanoparticle, nanocapsules, micellar systems, and oncology or cancer or neoplasms. For the 2,543 articles from 2002 to 2011, published in over 50 medical journals from over 30 countries, we analyzed the articles' countries, keywords, and authors. Our results show that articles on nanotechnology in oncology are increasing year by year, especially in recent years, and that both their quantity and their influence are growing. Globally, the USA leads the field, accounting for more than half of all articles, followed by Japan, Germany, and France, as well as emerging nations such as China (in second place) and India. Subjects such as nanoparticles, tumor markers, and drug delivery are the common research foci. With more and more scientists' interest and attention drawn to this field, major breakthroughs are likely in the coming years.
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works. Several methodologies have been developed and applied in asset management by various water companies worldwide, but often with limited success. To fill the gap, several research projects have explored algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. The major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, failure to incorporate a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper is oriented towards resolving these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimum is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. The work of assembling all required elements and developing appropriate interface protocols between the two tools, which decode potential remedial solutions into the pipe network model and calculate the corresponding scenario costs, is currently underway.
Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient
Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming
2016-01-01
This paper proposes an improved artificial bee colony algorithm named Weighted Global ABC (WGABC), designed to improve the convergence speed of the solution search equation in the search stage. The new method not only considers the effect of global factors on convergence speed in the search phase but also provides an expression for the global factor weights. Experiments on benchmark functions showed that the algorithm greatly improves convergence speed. We then derive the gas diffusion concentration from computational fluid dynamics (CFD) theory and simulate the gas diffusion model, including the influence of buildings, using the algorithm. Simulation verified the effectiveness of the WGABC algorithm in improving convergence speed for the optimal deployment of gas sensors. Finally, it is verified that the optimal deployment method based on WGABC can greatly improve the monitoring efficiency of the sensors compared with conventional deployment methods. PMID:27322262
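The kind of modification WGABC makes can be sketched with a compact artificial bee colony in which the solution search equation gains a weighted pull toward the global best. Note the weight handling below (a single hypothetical parameter `w`) is a simplification for illustration, not the published WGABC formula.

```python
import random

def wg_abc(f, dim, bounds, n_food=15, iters=200, limit=20, w=1.0, seed=3):
    """Minimal ABC sketch with a weighted global-best term in the
    solution search equation (illustrative, not the WGABC paper's formula)."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(p) for p in foods]
    trials = [0] * n_food
    gbest_f = min(fits)
    gbest = list(foods[fits.index(gbest_f)])

    def mutate(i):
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.randrange(dim)
        cand = list(foods[i])
        phi = rng.uniform(-1, 1)
        psi = rng.uniform(0, w)        # weighted pull toward the global best
        cand[d] += phi * (foods[i][d] - foods[k][d]) + psi * (gbest[d] - foods[i][d])
        cand[d] = min(hi, max(lo, cand[d]))
        return cand

    for _ in range(iters):
        for i in range(n_food):        # employed + onlooker phases, merged here
            cand = mutate(i)
            cf = f(cand)
            if cf < fits[i]:
                foods[i], fits[i], trials[i] = cand, cf, 0
                if cf < gbest_f:
                    gbest, gbest_f = list(cand), cf
            else:
                trials[i] += 1
            if trials[i] > limit:      # scout phase: abandon exhausted sources
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
    return gbest, gbest_f

# Example: minimize the sphere function in 2D
best, best_f = wg_abc(lambda p: sum(c * c for c in p), 2, (-5.0, 5.0))
```

The global term is what accelerates convergence relative to the plain ABC update, which moves a food source only relative to a random neighbour.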
2009-12-01
type of information available through DISA search tools: Centralized Search, Federated Search, and Enterprise Search (Defense Information Systems… Federated Search, and Enterprise Search services. Likewise, EFD and GCDS support COIs in discovering information by making information
Scalable global grid catalogue for Run3 and beyond
NASA Astrophysics Data System (ADS)
Martinez Pedreira, M.; Grigoras, C.;
2017-10-01
The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key-value stores, as a replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results, and conclusions.
InChI in the wild: an assessment of InChIKey searching in Google
2013-01-01
While chemical databases can be queried using the InChI string and the InChIKey (IK), the latter was designed for open-web searching. It is becoming increasingly effective for this as more sources enhance crawling of their websites by the Googlebot, with consequent IK indexing. Searchers who use Google as an adjunct to database access may be less familiar with the advantages of using the IK, as explored in this review. As an example, the IK for atorvastatin retrieves ~200 low-redundancy links from a Google search in 0.3 seconds. These include most major databases, with a very low false-positive rate. Results encompass less familiar but potentially useful sources and can be extended to isomer capture by using just the skeleton layer of the IK. Google Advanced Search can be used to filter large result sets. Image searching with the IK is also effective and complementary to open-web queries. Results can be particularly useful for less-common structures, as exemplified by a major metabolite of atorvastatin giving only three hits. Testing also demonstrated document-to-document and document-to-database joins via structure matching. The necessary generation of an IK from a chemical name can be accomplished using open tools and resources for patents, papers, abstracts, or other text sources. Active global sharing of local IK-linked information can be accomplished via surfacing in open laboratory notebooks, blogs, Twitter, figshare, and other routes. While information-rich chemistry (e.g. approved drugs) can exhibit swamping and redundancy effects, the much smaller IK result sets for link-poor structures become a transformative first-pass option. IK indexing has therefore turned Google into a de facto open global chemical information hub by merging links to most significant sources, including over 50 million PubChem and ChemSpider records. The simplicity, specificity, and speed of matching make it a useful option for biologists or others less familiar with chemical searching.
However, compared to rigorously maintained major databases, users need to be circumspect about the consistency of Google results and provenance of retrieved links. In addition, community engagement may be necessary to ameliorate possible future degradation of utility. PMID:23399051
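The skeleton-layer trick mentioned above is easy to script because a standard InChIKey has a fixed three-block layout: a 14-character connectivity (skeleton) hash, a 10-character block covering the remaining layers plus a standard flag and version letter, and a single protonation character. A small sketch, using hypothetical placeholder keys rather than real ones:

```python
import re

# Standard InChIKey layout: 14-char skeleton hash, hyphen,
# 8-char hash of the other layers + flag ('S' standard / 'N' non-standard)
# + version letter, hyphen, 1-char protonation indicator.
IK_PATTERN = re.compile(r"^([A-Z]{14})-([A-Z]{8}[SN][A-Z])-([A-Z])$")

def inchikey_layers(ik):
    """Split an InChIKey into (skeleton, stereo/isotope block, protonation)."""
    m = IK_PATTERN.match(ik)
    if not m:
        raise ValueError("not a well-formed InChIKey: %r" % ik)
    return m.groups()

def same_skeleton(ik_a, ik_b):
    """The first block hashes only connectivity, so a shared first block
    usually signals isomers of the same skeleton."""
    return inchikey_layers(ik_a)[0] == inchikey_layers(ik_b)[0]

# Hypothetical placeholder key, shown only to illustrate the layout
skeleton, stereo, proton = inchikey_layers("ABCDEFGHIJKLMN-OPQRSTUVSA-N")
```

Searching Google for just the 14-character first block, as the review describes, then retrieves pages for any stereoisomer or isotopologue sharing that skeleton.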
Locating Biodiversity Data Through The Global Change Master Directory
NASA Technical Reports Server (NTRS)
Olsen, Lola M.
1998-01-01
The Global Change Master Directory (GCMD) presently holds descriptions for almost 7000 data sets held worldwide. The directory's primary purpose is for data discovery. The information provided through the GCMD's Directory Interchange Format (DIF) is the set of information that a researcher would need to determine if a particular data set could be of value. By offering data set descriptions worldwide in many scientific disciplines - including meteorology, oceanography, ecology, geology, hydrology, geophysics, remote sensing, paleoclimate, solar-terrestrial physics, and human dimensions of climate change - the GCMD simplifies the discovery of data sources. Direct linkages to many of the data sets are also provided. In addition, several data set registration tools are offered for populating the directory. To search the directory, one may choose the Guided Search or Free-Text Search. Two experimental interfaces were also made available with the latest software release - one based on a keyword search and another based on a graphical interface. The graphical interface was designed in collaboration with the Human Computer Interaction Laboratory at the University of Maryland. The latest version of the software, Version 6, was released in April, 1998. It features the implementation of a scheme to handle hierarchical data set collections (parent-child relationships); a hierarchical geospatial location search scheme; a Java-based geographic map for conducting geospatial searches; a Related-URL field for project-related data set collections, metadata extensions (such as more detailed inventory information), etc.; a new implementation of the Isite software; a new dataset language field; hyperlinked email addresses, and more. 
The key to the continued evolution of the GCMD is the flexibility of the GCMD database, which allows modifications and additions to be made relatively easily to maintain currency, thus providing the ability to capitalize on current technology while importing all existing records. Changes are discussed and approved through an online "interoperability" forum. The next major release of the GCMD is scheduled for early 1999 and will include the incorporation of a new matrix-based interface; a rapid valids-based query system; improvements in the operations facility, important for future distributed options; new streamlined code for greater performance and maintainability; improvements in the handling of seven current fields proposed through the interoperability forum (at no expense to the data providers); and the release of DOCmorph, a more robust version of DIFmorph that translates many 'standards' multi-directionally. Issues and actions will also be addressed.
Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns
2013-01-01
Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to factor 45. Our best new index-based algorithm achieves a speedup of factor 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. 
This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide with RaligNAtor a robust and well documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
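The semi-global alignment at the core of these methods can be illustrated in its simplest, sequence-only form: the pattern must be aligned in full, but it may match anywhere inside the target, so the first DP row is free and the answer is the minimum over the last row. This sketch uses unit edit costs and omits the base-pair edit operations of the full sequence-structure problem.

```python
def semi_global_distance(pattern, text):
    """Edit distance between the pattern and its best-matching substring
    of the text: row 0 is free (any start position) and the result is the
    minimum over the last row (any end position). Unit costs; the base-pair
    operations of the full sequence-structure problem are omitted here."""
    m = len(pattern)
    prev = [0] * (len(text) + 1)          # free start in the text
    for i in range(1, m + 1):
        cur = [i] + [0] * len(text)       # deleting i pattern chars costs i
        for j in range(1, len(text) + 1):
            sub = prev[j - 1] + (pattern[i - 1] != text[j - 1])
            cur[j] = min(sub, prev[j] + 1, cur[j - 1] + 1)
        prev = cur
    return min(prev)                      # free end in the text
```

The paper's computing scheme goes further by reusing DP entries across all substrings and skipping non-matching ones; index-based variants additionally restrict the search via a suffix array, which is where the sublinear running times come from.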
Evaluation of Federated Searching Options for the School Library
ERIC Educational Resources Information Center
Abercrombie, Sarah E.
2008-01-01
Three hosted federated search tools, Follett One Search, Gale PowerSearch Plus, and WebFeat Express, were configured and implemented in a school library. Databases from five vendors and the OPAC were systematically searched. Federated search results were compared with each other and to the results of the same searches in the database's native…
Zeidán-Chuliá, Fares; Gürsoy, Mervi; Neves de Oliveira, Ben-Hur; Özdemir, Vural; Könönen, Eija; Gürsoy, Ulvi K
2015-01-01
Periodontitis, a formidable global health burden, is a common chronic disease that destroys tooth-supporting tissues. Biomarkers of the early phase of this progressive disease are of utmost importance for global health. In this context, saliva represents a non-invasive biosample. By using systems biology tools, we aimed to (1) identify an integrated interactome between matrix metalloproteinase (MMP)-REDOX/nitric oxide (NO) and apoptosis upstream pathways of periodontal inflammation, and (2) characterize the attendant topological network properties to uncover putative biomarkers to be tested in saliva from patients with periodontitis. Hence, we first generated a protein-protein network model of interactions ("BIOMARK" interactome) by using the STRING 10 database, a search tool for the retrieval of interacting genes/proteins, with "Experiments" and "Databases" as input options and a confidence score of 0.400. Second, we determined the centrality values (closeness, stress, degree or connectivity, and betweenness) for the "BIOMARK" members by using the Cytoscape software. We found Ubiquitin C (UBC), Jun proto-oncogene (JUN), and matrix metalloproteinase-14 (MMP14) as the most central hub- and non-hub-bottlenecks among the 211 genes/proteins of the whole interactome. We conclude that UBC, JUN, and MMP14 are likely an optimal candidate group of host-derived biomarkers, in combination with oral pathogenic bacteria-derived proteins, for detecting periodontitis at its early phase by using salivary samples from patients. These findings therefore have broader relevance for systems medicine in global health as well.
Water fluoridation and the quality of information available online.
Frangos, Zachary; Steffens, Maryke; Leask, Julie
2018-02-13
The Internet has transformed the way in which people approach their health care, with online resources becoming a primary source of health information. Little work has assessed the quality of online information regarding community water fluoridation. This study sought to assess the information available to individuals searching online, with emphasis on the credibility and quality of websites. We identified the top 10 web pages returned by different search engines, using common fluoridation search terms (identified in Google Trends). Web pages were scored using a credibility, quality, and health literacy tool based on Global Advisory Committee on Vaccine Safety (GACVS) and Centers for Disease Control and Prevention (CDC) criteria. Scores were compared according to fluoridation stance and domain type, then ranked by quality. The functionality of the scoring tool was analysed via a Bland-Altman plot of inter-rater reliability. Five hundred web pages were returned, of which 55 were scored following removal of duplicates and irrelevant pages. Of these, 28 (51%) were pro-fluoridation, 16 (29%) were neutral and 11 (20%) were anti-fluoridation. Pro-, neutral and anti-fluoridation pages scored well against health literacy standards (0.91, 0.90 and 0.81 out of 1, respectively). Neutral and pro-fluoridation web pages showed strong credibility, with mean scores of 0.80 and 0.85 respectively, while anti-fluoridation pages scored 0.62. Most pages scored poorly for content quality, providing a moderate amount of superficial information. Those seeking online information regarding water fluoridation are faced with comprehensible, yet poorly referenced, superficial information. Sites were credible and user friendly; however, our results suggest that online resources need to focus on providing more transparent information with appropriate figures to consolidate the information. © 2018 FDI World Dental Federation.
SA-Search: a web tool for protein structure mining based on a Structural Alphabet
Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre
2004-01-01
SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search. PMID:15215446
Gaps in studies of global health education: an empirical literature review.
Liu, Yan; Zhang, Ying; Liu, Zhaolan; Wang, JianLi
2015-01-01
Global health has attracted the interest of many students and faculties, thereby initiating the establishment of many academic programs in global health research and education. Global health education reflects the increasing attention toward social accountability in medical education. This study aims to identify gaps in studies on global health education. A critical literature review of empirical studies was conducted using Boolean search techniques. A total of 238 articles, including 16 reviews, were identified. There has been a boom in the number of studies on global health education since 2010. Four gaps were summarized. First, 94.6% of all studies on global health education were conducted in North American and European countries, of which 65.6% were carried out in the United States, followed by Canada (14.3%) and the United Kingdom (9.2%). Only seven studies (2.9%) were conducted in Asian countries, five (2.1%) in Oceania, and two (0.8%) in South American/Caribbean countries. A total of 154 studies (64.4%) were qualitative and 64 (26.8%) were quantitative. Second, elective courses and training programs were the most frequently used approaches to global health education. Third, there was a gap in the standardization of global health education. Finally, it was mainly targeted at medical students, residents, and doctors; it has not met the demand for global health education among all students in medicine-related studies. Global health education could be an influential tool for achieving health equity, reducing health disparities, and shaping future professional careers. It is time to build and expand education in global health, especially in developing countries. Global health education should be integrated into primary medical education. Interdisciplinary approaches and interprofessional collaboration are recommended.
Collaboration and support from developed countries in global health education should be advocated to narrow the gap and to create further mutual benefits.
Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-09-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.
Visualization of GPM Standard Products at the Precipitation Processing System (PPS)
NASA Astrophysics Data System (ADS)
Kelley, O.
2010-12-01
Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80 terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OpenDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide realtime globally-gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).
Googling DNA sequences on the World Wide Web.
Hajibabaei, Mehrdad; Singer, Gregory A C
2009-11-10
New web-based technologies provide an excellent opportunity for sharing and accessing information and using web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm and implemented it for searching species-specific genomic sequences, DNA barcodes, by using popular web-based methods such as Google. We developed an alignment independent character based algorithm based on dividing a sequence library (DNA barcodes) and query sequence to words. The actual search is conducted by conventional search tools such as freely available Google Desktop Search. We implemented our algorithm in two exemplar packages. We developed pre and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows a high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data. It provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
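The word-based matching idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it breaks barcode sequences and the query into fixed-length words (k-mers) and ranks library entries by shared-word count, the kind of term matching a text search engine performs on documents. The function names, the k value, and the toy sequences are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): alignment-independent,
# word-based DNA matching. Sequences are broken into overlapping
# fixed-length words (k-mers), and library entries are ranked by the
# number of words they share with the query, much as a text search
# engine ranks documents by shared terms.

def to_words(seq, k=8):
    """Return the set of overlapping k-letter words in a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def rank_matches(query, library, k=8):
    """Rank library entries by the count of words shared with the query."""
    q = to_words(query, k)
    scores = {name: len(q & to_words(seq, k)) for name, seq in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy barcode library (illustrative sequences, not real barcodes).
library = {
    "species_A": "ATGGCATTCGATTACGGATCCAT",
    "species_B": "TTGACCGGTTAACCGGTTAACCA",
}
best, score = rank_matches("ATGGCATTCGATTACG", library)[0]
print(best)  # the query is a fragment of species_A
```

Because each sequence reduces to a bag of words, the actual lookup can be delegated to any off-the-shelf text indexer, which is the point the abstract makes about using tools such as Google Desktop Search.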
[SCREENING OF NUTRITIONAL STATUS AMONG ELDERLY PEOPLE AT FAMILY MEDICINE].
Račić, M; Ivković, N; Kusmuk, S
2015-11-01
The prevalence of malnutrition in the elderly is high. Malnutrition, or the risk of malnutrition, can be detected with nutritional screening or assessment tools. This systematic review aimed to identify tools that are reliable, valid, sensitive and specific for screening nutritional status in patients older than 65 in family medicine. The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Studies were retrieved using the MEDLINE (via Ovid), PubMed and Cochrane Library electronic databases and by manual searching of relevant articles listed in the reference lists of key publications. The electronic databases were searched using defined key words adapted to each database and using MeSH terms. Manual revision of reviews and original articles was performed using the Electronic Journals Library. Included studies involved the development and validation of screening tools in the community-dwelling elderly population. The tools subjected to validity and reliability testing for use in the community-dwelling elderly population were the Mini Nutritional Assessment (MNA), Mini Nutritional Assessment-Short Form (MNA-SF), Nutrition Screening Initiative (NSI), which includes the DETERMINE checklist and the Level I and II Screens, Seniors in the Community: Risk Evaluation for Eating and Nutrition (SCREEN I and SCREEN II), Subjective Global Assessment (SGA), Nutritional Risk Index (NRI), and Malaysian and South African tools. The MNA and MNA-SF appear to have the highest reliability and validity for screening the community-dwelling elderly, and the reliability and validity of SCREEN II are good. The authors conclude that whilst several tools have been developed, most have not undergone extensive testing to demonstrate their ability to identify nutritional risk. The MNA and MNA-SF have the highest reliability and validity for screening nutritional status in the community-dwelling elderly, and the reliability and validity of SCREEN II are satisfactory. 
These instruments also contain all three nutritional status indicators and are practical for use in family medicine. However, the gold standard for screening cannot be set because testing of reliability and continuous validation in the study with a higher level of evidence need to be conducted in family medicine.
The Front-End to Google for Teachers' Online Searching
ERIC Educational Resources Information Center
Seyedarabi, Faezeh
2006-01-01
This paper reports on ongoing work in designing and developing a personalised search tool for teachers' online searching that uses the Google search engine (repository) for the implementation and testing of the first research prototype.
Serbruyns, Leen; Leunissen, Inge; Huysmans, Toon; Cuypers, Koen; Meesen, Raf L; van Ruitenbeek, Peter; Sijbers, Jan; Swinnen, Stephan P
2015-04-01
Even though declines in sensorimotor performance during healthy aging have been documented extensively, their underlying neural mechanisms remain unclear. Here, we explored whether age-related subcortical atrophy plays a role in sensorimotor performance declines, particularly during bimanual manipulative performance (Purdue Pegboard Test). The thalamus, putamen, caudate and pallidum of 91 participants across the adult lifespan (ages 20-79 years) were automatically segmented. In addition to studying age-related changes in the global volume of each subcortical structure, local deformations within these structures, indicative of subregional volume changes, were assessed by means of recently developed shape analyses. Results showed widespread age-related global and subregional atrophy, as well as some notable subregional expansion. Even though global atrophy failed to explain the observed performance declines with aging, shape analyses indicated that atrophy in left and right thalamic subregions, specifically subserving connectivity with the premotor, primary motor and somatosensory cortical areas, mediated the relation between aging and performance decline. It is concluded that subregional volume assessment by means of shape analyses offers a sensitive tool with high anatomical resolution in the search for specific age-related associations between brain structure and behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.
Explore GPM IMERG and Other Global Precipitation Products with GES DISC GIOVANNI
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, Dana M.; Vollmer, Bruce; MacRitchie, Kyle; Kempler, Steven
2015-01-01
New features and capabilities in the newly released GIOVANNI allow exploring GPM IMERG (Integrated Multi-satellitE Retrievals for GPM) Early, Late and Final Run global half-hourly and monthly precipitation products, as well as other precipitation products distributed by the GES DISC, such as the TRMM Multi-Satellite Precipitation Analysis (TMPA), MERRA (Modern-Era Retrospective Analysis for Research and Applications), NLDAS (North American Land Data Assimilation Systems), and GLDAS (Global Land Data Assimilation Systems). GIOVANNI is a web-based tool developed by the GES DISC (Goddard Earth Sciences Data and Information Services Center) to visualize and analyze Earth science data without having to download data and software. The new interface in GIOVANNI allows searching and filtering precipitation products from different NASA missions and projects and expands the capabilities to inter-compare different precipitation products in one interface. Knowing the differences between precipitation products is important for identifying issues in retrieval algorithms, biases, uncertainties, etc. Due to different formats, data structures, units and so on, it is not easy to inter-compare precipitation products. Newly added features and capabilities (unit conversion, regridding, etc.) in GIOVANNI make such inter-comparisons possible. In this presentation, we will describe these new features and capabilities along with examples.
Web Search Studies: Multidisciplinary Perspectives on Web Search Engines
NASA Astrophysics Data System (ADS)
Zimmer, Michael
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Karunamoorthi, Kaliyaperumal
2014-06-02
The counterfeiting of anti-malarials represents a form of attack on global public health in which fake and substandard anti-malarials serve as de facto weapons of mass destruction, particularly in resource-constrained endemic settings, where malaria causes nearly 660,000 preventable deaths and threatens millions of lives annually. It has been estimated that fake anti-malarials contribute to nearly 450,000 preventable deaths every year. This crime against humanity is often underestimated or ignored. This study attempts to describe and characterize the direct and indirect effects of counterfeit anti-malarials on public health, clinical care and socio-economic conditions. A search was performed using key databases, WHO documents, and English language search engines. Of 262 potential articles that were identified using a fixed set of criteria, a convenience sample of 105 appropriate articles was selected for this review. Artemisinin-based combination therapy (ACT) is an important tool in the fight against malaria, but a sizable number of patients are unable to afford this first-line treatment. Consequently, patients tend to procure cheaper anti-malarials, which may be fake or substandard. Forensic palynology reveals that counterfeits originate in Asia. Fragile drug regulations, ineffective law-enforcement agencies and corruption further burden ailing healthcare facilities. Substandard/fake anti-malarials can cause (a) economic sabotage; (b) therapeutic failure; (c) increased risk of the emergence and spread of resistant strains of Plasmodium falciparum and Plasmodium vivax; (d) an undermining of trust/confidence in healthcare stakeholders/systems; and (e) serious side effects or death. Combating counterfeit anti-malarials is a complex task due to limited resources and poor techniques for the detection and identification of fake anti-malarials. This situation calls for sustainable, global, scientific research and policy change. 
Further, responsible stakeholders in combination with the synthesis and supply of next generation malaria control tools, such as low-cost anti-malarials, must promote the development of a counterfeit-free and malaria-free future.
Health information on internet: quality, importance, and popularity of persian health websites.
Samadbeik, Mahnaz; Ahmadi, Maryam; Mohammadi, Ali; Mohseni Saravi, Beniamin
2014-04-01
The Internet has provided great opportunities for disseminating both accurate and inaccurate health information. Therefore, the quality of information is a widespread concern affecting human life. Despite the increasingly substantial growth in the number of users, Persian health websites and the proportion of internet-using patients, little is known about the quality of Persian medical and health websites. The current study aimed first to assess the quality, popularity and importance of websites providing Persian health-related information, and second to evaluate the correlation of the popularity and importance rankings with quality scores. The sample websites were identified by entering health-related keywords into the four search engines most popular with Iranian users, based on Alexa rankings at the time of the study. Each selected website was assessed using three tools: the Bomba and Land Index, Google PageRank and the Alexa ranking. The evaluated websites' characteristics (ownership structure, database, scope and objective) had no effect on the Alexa traffic global rank, Alexa traffic rank in Iran, Google PageRank or Bomba total score. Most websites (78.9 percent, n = 56) were in the moderate category (8 ≤ x ≤ 11.99) based on their quality levels. There was no statistically significant association between Google PageRank and the Bomba index variables or the Alexa traffic global rank (P > 0.05). The Persian health websites had better Bomba quality scores on availability and usability guidelines than on other guidelines. Google PageRank did not properly reflect the real quality of the evaluated websites, and Internet users seeking online health information should not merely rely on it for any kind of prejudgment regarding Persian health websites. However, they can use the Iran Alexa rank as a primary filtering tool for these websites. 
Therefore, designing search engines dedicated to explore accredited Persian health-related Web sites can be an effective method to access high-quality Persian health websites.
Eczema, Atopic Dermatitis, or Atopic Eczema: Analysis of Global Search Engine Trends.
Xu, Shuai; Thyssen, Jacob P; Paller, Amy S; Silverberg, Jonathan I
The lack of standardized nomenclature for atopic dermatitis (AD) creates challenges for scientific communication, patient education, and advocacy. We sought to determine the relative popularity of the terms eczema, AD, and atopic eczema (AE) using global search engine volumes. A retrospective analysis of average monthly search volumes from 2014 to 2016 of Google, Bing/Yahoo, and Baidu was performed for eczema, AD, and AE in English and 37 other languages. Google Trends was used to determine the relative search popularity of each term from 2006 to 2016 in English and the top foreign languages, German, Turkish, Russian, and Japanese. Overall, eczema accounted for 1.5 million monthly searches (84%) compared with 247 000 searches for AD (14%) and 44 000 searches for AE (2%). For English language, eczema accounted for 93% of searches compared with 6% for AD and 1% for AE. Search popularity for eczema increased from 2006 to 2016 but remained stable for AD and AE. Given the ambiguity of the term eczema, we recommend the universal use of the next most popular term, AD.
Global polar geospatial information service retrieval based on search engine and ontology reasoning
Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang
2007-01-01
In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services is proposed, based on geospatial service search and ontology reasoning: the geospatial service search finds coarse candidate services on the web, and ontology reasoning refines these coarse results. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
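The two-stage retrieval described above, a coarse keyword search followed by ontology-based refinement, might be sketched as follows. The toy ontology, service records, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of two-stage service retrieval: stage 1 is a coarse
# keyword match over free-text service descriptions; stage 2 keeps only
# services whose type matches the requested concept or one of its
# subclasses in a toy ontology. All names here are illustrative.

ONTOLOGY = {  # concept -> direct subclasses
    "MapService": ["WMS", "WMTS"],
    "DataService": ["WFS", "WCS"],
}

def subclasses(concept):
    """All concepts reachable from `concept`, including itself."""
    found, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in found:
            found.add(c)
            stack.extend(ONTOLOGY.get(c, []))
    return found

def search(services, keyword, concept):
    # Stage 1: coarse keyword match (the "search engine" step).
    coarse = [s for s in services if keyword.lower() in s["description"].lower()]
    # Stage 2: ontology reasoning refines the coarse results by type.
    wanted = subclasses(concept)
    return [s for s in coarse if s["type"] in wanted]

services = [
    {"type": "WMS", "description": "Antarctic ice map service"},
    {"type": "WFS", "description": "Antarctic station features"},
]
print([s["type"] for s in search(services, "antarctic", "MapService")])
# Only the WMS service survives both stages.
```

The point of the second stage is that a plain keyword match returns both services, while reasoning over the concept hierarchy discards the feature service that does not satisfy the requested service type.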
Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments
Yim, Won Cheol; Cushman, John C.
2017-07-22
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
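The "divide" step of a query-distribution approach like the one described can be sketched as splitting a multi-sequence FASTA query into chunks that could each run as an independent BLAST job. This is an illustrative sketch under stated assumptions, not DCBLAST's actual code; the `split_fasta` name and the round-robin scheme are inventions for this example.

```python
# Illustrative sketch (not DCBLAST itself) of the "divide" step in a
# query-distribution approach: split a multi-sequence FASTA query into
# N chunks, each of which could be submitted as an independent BLAST
# job on a separate node, with outputs concatenated afterwards.

def split_fasta(fasta_text, n_chunks):
    """Distribute FASTA records round-robin across n_chunks chunks."""
    # Each record starts with '>'; splitting on it isolates the records.
    records = [">" + r for r in fasta_text.strip().split(">") if r.strip()]
    chunks = [[] for _ in range(n_chunks)]
    for i, record in enumerate(records):
        chunks[i % n_chunks].append(record)
    return ["".join(chunk) for chunk in chunks]

query = ">q1\nATGC\n>q2\nGGCC\n>q3\nTTAA\n"
parts = split_fasta(query, 2)
# parts[0] holds q1 and q3; parts[1] holds q2. Each part could be
# written to its own file and passed to a BLAST process in parallel.
print(len(parts))  # 2
```

Because BLAST scores each query sequence independently, chunked results can simply be merged, which is why this embarrassingly parallel split scales well with the number of nodes.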
Scott, Stephanie; Parkinson, Kathryn; Kaner, Eileen; Robalino, Shannon; Stead, Martine; Power, Christine; Fitzgerald, Niamh; Wrieden, Wendy; Adamson, Ashley
2017-03-03
Excess body weight and heavy alcohol consumption are two of the greatest contributors to global disease. Alcohol use peaks in early adulthood. Alcohol consumption can also exacerbate weight gain. A high body mass index and heavy drinking are independently associated with liver disease but, in combination, they produce an intensified risk of damage, with individuals from lower socio-economic status groups disproportionately affected. We will conduct searches in MEDLINE, Embase, PubMed, PsycINFO, ERIC, ASSIA, Web of Knowledge (WoK), Scopus, CINAHL via EBSCO, LILACS, CENTRAL and ProQuest Dissertations and Theses for studies that assess targeted preventative interventions of any length of time or duration of follow-up that are focused on reducing unhealthy eating behaviour and linked risky alcohol use in 18-25-year-olds. Primary outcomes will be reported changes in: (1) dietary, nutritional or energy intake and (2) alcohol consumption. We will include all randomised controlled trials (RCTs) including cluster RCTs; randomised trials; non-randomised controlled trials; interrupted time series; quasi-experimental; cohort involving concurrent or historical controls and controlled before and after studies. Database searches will be supplemented with searches of Google Scholar, hand searches of key journals and backward and forward citation searches of reference lists of identified papers. Search records will be independently screened by two researchers, with full-text copies of potentially relevant papers retrieved for in-depth review against the inclusion criteria. Methodological quality of RCTs will be evaluated using the Cochrane risk of bias tool. Other study designs will be evaluated using the Cochrane Public Health Review Group's recommended Effective Public Health Practice Project Quality Assessment Tool for Quantitative Studies. Studies will be pooled by meta-analysis and/or narrative synthesis as appropriate for the nature of the data retrieved. 
It is anticipated that exploration of intervention effectiveness and characteristics (including theory base, behaviour change technique, modality, delivery agent(s) and training of intervention deliverers, including their professional status, and frequency/duration of exposure) will aid subsequent co-design and piloting of a future intervention to help reduce health risk and social inequalities due to excess weight gain and alcohol consumption. PROSPERO CRD42016040128.
Numerical study on injection parameters optimization of thin wall and biodegradable polymers parts
NASA Astrophysics Data System (ADS)
Santos, C.; Mendes, A.; Carreira, P.; Mateus, A.; Malça, C.
2017-07-01
Nowadays, the mold industry is searching for new markets with diversified, added-value products. The concept associated with the production of thin-walled and biodegradable parts, mostly manufactured by injection molding, has assumed a relevant importance due to environmental and economic factors. The growth of a global consciousness about the harmful effects of conventional polymers on our quality of life, together with the legislation imposed, has become a key factor in the choice of a particular product by the consumer. The target of this work is to provide an integrated solution for the injection of parts with thin walls manufactured from biodegradable materials. This integrated solution includes the design and manufacture of the mold as well as finding the optimum values for the injection parameters in order to make the process effective and competitive. For this, the Moldflow software was used. It was demonstrated that this computational tool provides effective responsiveness and can constitute an important tool for supporting the injection molding of thin-walled and biodegradable parts.
PDB-Explorer: a web-based interactive map of the protein data bank in shape space.
Jin, Xian; Awale, Mahendra; Zasso, Michaël; Kostro, Daniel; Patiny, Luc; Reymond, Jean-Louis
2015-10-23
The RCSB Protein Data Bank (PDB) provides public access to experimentally determined 3D structures of biological macromolecules (proteins, peptides and nucleic acids). While various tools are available to explore the PDB, options to access the global structural diversity of the entire PDB and to perceive relationships between PDB structures remain very limited. A 136-dimensional atom-pair 3D fingerprint for proteins (3DP), counting categorized atom pairs at increasing through-space distances, was designed to represent the molecular shape of PDB entries. Nearest-neighbor search examples were reported, exemplifying the ability of 3DP similarity to identify closely related biomolecules, from small peptides to enzymes and large multiprotein complexes such as virus particles. Principal component analysis was used to visualize the PDB in 3DP space. The 3DP property space groups proteins and protein assemblies according to their 3D-shape similarity, yet shows an exquisite ability to distinguish between closely related structures. An interactive website called PDB-Explorer is presented, featuring a color-coded interactive map of the PDB in 3DP space. Each pixel of the map contains one or more PDB entries, which are directly visualized as ribbon diagrams when the pixel is selected. The PDB-Explorer website allows performing 3DP nearest-neighbor searches of any PDB entry or of any structure uploaded as a protein-type PDB file. All functionalities on the website are implemented in JavaScript in a platform-independent manner and draw data from a server that is updated daily with the latest PDB additions, ensuring complete and up-to-date coverage. The essentially instantaneous 3DP similarity search with the PDB-Explorer provides results comparable to those of much slower 3D-alignment algorithms, and automatically clusters proteins from the same superfamilies into tight groups. 
A chemical space classification of the PDB based on molecular shape was obtained using a new atom-pair 3D fingerprint for proteins and implemented in a web-based database exploration tool comprising an interactive color-coded map of the PDB chemical space and a nearest-neighbor search tool. The PDB-Explorer website is freely available at www.cheminfo.org/pdbexplorer and represents an unprecedented opportunity to interactively visualize and explore the structural diversity of the PDB.
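A shape fingerprint in the spirit of the 3DP descriptor, counting atom pairs in binned through-space distance ranges, can be sketched as follows. The bin width, bin count, and similarity measure are illustrative assumptions; the actual 3DP is 136-dimensional and categorizes atom pairs, which this sketch omits.

```python
# Minimal sketch in the spirit of the 3DP descriptor (illustrative
# bins, not the paper's 136-dimensional scheme): histogram pairwise
# atom distances so that structures with similar shapes yield similar
# count vectors, enabling fast vector-based similarity search.

import math

def fingerprint(coords, bin_width=2.0, n_bins=8):
    """Histogram of pairwise distances for a list of (x, y, z) atoms."""
    fp = [0] * n_bins
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            b = min(int(d // bin_width), n_bins - 1)  # clamp far pairs
            fp[b] += 1
    return fp

def similarity(a, b):
    """City-block similarity between two fingerprints (1 = identical)."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / max(sum(a) + sum(b), 1)

fp1 = fingerprint([(0, 0, 0), (1, 0, 0), (0, 3, 0)])
print(similarity(fp1, fp1))  # identical structures score 1.0
```

Comparing fixed-length count vectors costs only a few arithmetic operations per pair, which is why fingerprint search can approach the results of much slower 3D-alignment algorithms at a fraction of the cost.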
Make Mine a Metasearcher, Please!
ERIC Educational Resources Information Center
Repman, Judi; Carlson, Randal D.
2000-01-01
Describes metasearch tools and explains their value in helping library media centers improve students' Web searches. Discusses Boolean queries and the emphasis on speed at the expense of comprehensiveness; and compares four metasearch tools, including the number of search engines consulted, user control, and databases included. (LRW)
Patent urachus repair - slideshow
ERIC Educational Resources Information Center
Rochkind, Jonathan
2007-01-01
The ability to search and receive results in more than one database through a single interface--or metasearch--is something many users want. Google Scholar--the search engine for specifically scholarly content--and library metasearch products like Ex Libris's MetaLib, Serials Solution's Central Search, WebFeat, and products based on MuseGlobal used…
An advanced search engine for patent analytics in medicinal chemistry.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Patent collections contain an important amount of medical-related knowledge, but existing tools have been reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search to retrieve relevant patents given a short query, and a related-patent search to retrieve similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is particularly important for coping with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced fairly mixed results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition, and experts appreciated its usability.
Catalog Federation and Interoperability for Geoinformatics
NASA Astrophysics Data System (ADS)
Memon, A.; Lin, K.; Baru, C.
2008-12-01
With the increasing proliferation of online resources in the geosciences, including data, tools, and software services, there is also a proliferation of catalogs containing metadata that describe these resources. To realize the vision articulated in the NSF Workshop on Building a National Geoinformatics System (March 2007), where a user can sit at a terminal and easily search, discover, integrate, and use distributed geoscience resources, it will be essential that a search request be able to traverse these multiple metadata catalogs. In this paper, we describe our effort at prototyping catalog interoperability across multiple metadata catalogs. An example of a metadata catalog is the one employed in the GEON Project (www.geongrid.org). The central GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, therefore, can be embedded in any software application. There has been a requirement from some of the GEON collaborators (for example, at the University of Hyderabad, India, and the Navajo Technical College, New Mexico) to deploy their own catalogs, to store information about their resources locally, while they publish some of this information for broader access and use. Thus, a search must now be able to span multiple, independent GEON catalogs. Next, some of our collaborators (e.g., GEO Grid, the Global Earth Observations Grid, in Japan) are implementing the Catalog Services for the Web (CS-W) standard for their catalog, thereby requiring the search to span catalogs implemented using the CS-W standard as well. Finally, we have recently deployed a search service to access all EarthScope data products, which are distributed across organizations in Seattle, WA (IRIS), Boulder, CO (UNAVCO), and Potsdam, Germany (ICDP/GFZ). This service essentially implements a virtual catalog (the actual catalogs and data are stored at the remote locations).
Thus, there is a need to incorporate such third-party searches within a broader search function, such as GEONsearch in the GEON Portal. We will discuss technical issues involved in designing and deploying such a multi-catalog search service in GEON.
ERIC Educational Resources Information Center
Tunender, Heather; Ervin, Jane
1998-01-01
Character strings were planted in a World Wide Web site (Project Whistlestop) to test the indexing and retrieval rates of five Web search tools (Lycos, Infoseek, AltaVista, Yahoo, Excite). It was found that the search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…
NASA Astrophysics Data System (ADS)
Hoebelheinrich, N. J.; Lynnes, C.; West, P.; Ferritto, M.
2014-12-01
Two problems common to many geoscience domains are the difficulty of finding tools to work with a given dataset collection and, conversely, the difficulty of finding data for a known tool. A collaborative team from the Earth Science Information Partnership (ESIP) has come together to design and create a web service, called ToolMatch, to address these problems. The team began their efforts by defining an initial, relatively simple conceptual model that addresses the two use cases briefly described above. The conceptual model is expressed as an ontology using OWL (Web Ontology Language) and DCterms (Dublin Core Terms), and utilizes standard ontologies such as DOAP (Description of a Project), FOAF (Friend of a Friend), SKOS (Simple Knowledge Organization System), and DCAT (Data Catalog Vocabulary). The ToolMatch service takes advantage of various Semantic Web and Web standards, such as OpenSearch, RESTful web services, SWRL (Semantic Web Rule Language), and SPARQL (SPARQL Protocol and RDF Query Language). The first version of the ToolMatch service was deployed in early fall 2014. While more complete testing is required, a number of communities besides ESIP member organizations have expressed interest in collaborating to create, test, and use the service and to incorporate it into their own web pages, tools, and/or services, including the USGS Data Catalog service, DataONE, the Deep Carbon Observatory, the Virtual Solar Terrestrial Observatory (VSTO), and the U.S. Global Change Research Program. In this session, presenters will discuss the inception and development of the ToolMatch service, the collaborative process used to design, refine, and test the service, and future plans for the service.
Exploring Gendered Notions: Gender, Job Hunting and Web Searches
NASA Astrophysics Data System (ADS)
Martey, R. M.
Based on analysis of a series of interviews, this chapter suggests that in looking for jobs online, women confront gendered notions of the Internet as well as gendered notions of the jobs themselves. It argues that the social and cultural contexts of both the search tools and the search tasks should be considered in exploring how Web-based technologies serve women in a job search. For these women, the opportunities and limitations of online job-search tools were intimately related to their personal and social needs, especially needs for part-time work, maternity benefits, and career advancement. Although job-seeking services such as Monster.com were used frequently by most of these women, search services did not completely fulfill all their informational needs, and became an often frustrating starting point for a job search rather than an end point.
Scope of Policy Issues in eHealth: Results From a Structured Literature Review
Durrani, Hammad; Nayani, Parvez; Fahim, Ammad
2012-01-01
Background: eHealth is widely used as a tool for improving health care delivery and information. However, distinct policies and strategies are required for its proper implementation and integration at national and international levels. Objective: To determine the scope of policy issues faced by individuals, institutions, or governments in implementing eHealth programs. Methods: We conducted a structured review of both peer-reviewed and gray literature from 1998–2008. A Medline search for peer-reviewed articles found 40 papers focusing on different aspects of eHealth policy. In addition, a Google search found 20 national- and international-level policy papers and documents. We reviewed these articles to extract policy issues and solutions described at different levels of care. Results: The literature search found 99 policy issues related to eHealth. We grouped these issues under the following themes: (1) networked care, (2) interjurisdictional practice, (3) diffusion of eHealth/digital divide, (4) eHealth integration with existing systems, (5) response to new initiatives, (6) goal-setting for eHealth policy, (7) evaluation and research, (8) investment, and (9) ethics in eHealth. Conclusions: We provide a list of policy issues that should be understood and addressed by policy makers at global, jurisdictional, and institutional levels, to facilitate smooth and reliable planning of eHealth programs. PMID:22343270
Gupta, Digant; Vashi, Pankaj G; Lammersfeld, Carolyn A; Braun, Donald P
2011-01-01
Length of stay (LOS) has been used as a surrogate marker for patients' well-being during hospital treatment. We systematically reviewed all pertinent literature on the role of nutritional status in predicting LOS in cancer. A systematic search of human studies published in English was conducted using the MEDLINE database (all articles published as of December 2010). We searched using the terms 'nutritional status', 'nutritional assessment', 'nutritional screening', and 'malnutrition' in combination with the following terms: length of stay, length of hospital stay, duration of stay, and duration of hospitalization, together with 'cancer' or 'oncology'. The MEDLINE search identified a total of 149 articles, of which only 21 met the selection criteria. Of the 21 studies, 10 investigated gastrointestinal cancer patients, 4 gynecological cancer, and 7 heterogeneous cancer populations. Eight studies used the subjective global assessment (SGA) or patient-generated SGA (PG-SGA), 9 used serum albumin and/or BMI, and 4 used other methods of nutritional assessment. Validated nutritional tools such as the SGA/PG-SGA are better predictors of LOS in gastrointestinal cancers requiring surgery than in nonsurgical gastrointestinal cancer patients. Correcting malnutrition may decrease the LOS and perhaps even lower the rate of hospital readmissions in this population. Copyright © 2011 S. Karger AG, Basel.
Graphical Representations of Electronic Search Patterns.
ERIC Educational Resources Information Center
Lin, Xia; And Others
1991-01-01
Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…
Search and dissemination in data processing. [searches performed for Aviation Technology Newsletter
NASA Technical Reports Server (NTRS)
Gold, C. H.; Moore, A. M.; Dodd, B.; Dittmar, V.
1974-01-01
Manual retrieval methods were used to complete 54 searches of interest for the General Aviation Newsletter. Subjects of search ranged from television transmission to machine tooling, Apollo moon landings, electronic equipment, and aerodynamics studies.
Assessing the nutritional status of hospitalized elderly.
Abd Aziz, Nur Adilah Shuhada; Teng, Nur Islami Mohd Fahmi; Abdul Hamid, Mohd Ramadan; Ismail, Nazrul Hadi
2017-01-01
The increasing number of elderly people worldwide throughout the years is concerning due to the health problems often faced by this population. This review aims to summarize the nutritional status among hospitalized elderly and the role of the nutritional assessment tools in this issue. A literature search was performed on six databases using the terms "malnutrition", "hospitalised elderly", "nutritional assessment", "Mini Nutritional Assessment (MNA)", "Geriatric Nutrition Risk Index (GNRI)", and "Subjective Global Assessment (SGA)". According to the previous studies, the prevalence of malnutrition among hospitalized elderly shows an increasing trend not only locally but also across the world. Under-recognition of malnutrition causes the number of malnourished hospitalized elderly to remain high throughout the years. Thus, the development of nutritional screening and assessment tools has been widely studied, and these tools are readily available nowadays. SGA, MNA, and GNRI are the nutritional assessment tools developed specifically for the elderly and are well validated in most countries. However, to date, there is no single tool that can be considered as the universal gold standard for the diagnosis of nutritional status in hospitalized patients. It is important to identify which nutritional assessment tool is suitable to be used in this group to ensure that a structured assessment and documentation of nutritional status can be established. An early and accurate identification of the appropriate treatment of malnutrition can be done as soon as possible, and thus, the malnutrition rate among this group can be minimized in the future.
[Advanced online search techniques and dedicated search engines for physicians].
Nahum, Yoav
2008-02-01
In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.
SIOExplorer: Modern IT Methods and Tools for Digital Library Management
NASA Astrophysics Data System (ADS)
Sutton, D. W.; Helly, J.; Miller, S.; Chase, A.; Clarck, D.
2003-12-01
With more geoscience disciplines becoming data-driven, it is increasingly important to utilize modern techniques for data, information, and knowledge management. SIOExplorer is a new digital library project with 2 terabytes of oceanographic data collected over the last 50 years on 700 cruises by the Scripps Institution of Oceanography. It is built using a suite of information technology tools and methods that allow for an efficient and effective digital library management system. The library consists of a number of independent collections, each with corresponding metadata formats. The system architecture allows each collection to be built and uploaded based on a collection-dependent metadata template file (MTF). This file is used to create the hierarchical structure of the collection, to create metadata tables in a relational database, and to populate object metadata files and the collection as a whole. Collections are comprised of arbitrary digital objects stored at the San Diego Supercomputer Center (SDSC) High Performance Storage System (HPSS) and managed using the Storage Resource Broker (SRB), data-handling middleware developed at SDSC. SIOExplorer interoperates with other collections as a data provider through the Open Archives Initiative (OAI) protocol. The user services for SIOExplorer are accessed from CruiseViewer, a Java application served using Java Web Start from the SIOExplorer home page. CruiseViewer is an advanced tool for data discovery and access. It implements general keyword and interactive geospatial search methods for the collections, and georeferences search results on user-selected basemaps such as global topography or crustal age. User services include metadata viewing, opening of selected mime-type digital objects (such as images, documents, and grid files), and downloading of objects (including the brokering of proprietary hold restrictions).
Water Pollution Search | ECHO | US EPA
The Water Pollution Search within the Water Pollutant Loading Tool gives users options to search for pollutant loading information from Discharge Monitoring Report (DMR) and Toxic Release Inventory (TRI) data.
Helping Students Choose Tools To Search the Web.
ERIC Educational Resources Information Center
Cohen, Laura B.; Jacobson, Trudi E.
2000-01-01
Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…
Dynamic Search and Working Memory in Social Recall
ERIC Educational Resources Information Center
Hills, Thomas T.; Pachur, Thorsten
2012-01-01
What are the mechanisms underlying search in social memory (e.g., remembering the people one knows)? Do the search mechanisms involve dynamic local-to-global transitions similar to semantic search, and are these transitions governed by the general control of attention, associated with working memory span? To find out, we asked participants to…
A Framework and Methodology for Navigating Disaster and Global Health in Crisis Literature
Chan, Jennifer L.; Burkle, Frederick M.
2013-01-01
Both ‘disasters’ and ‘global health in crisis’ research has dramatically grown due to the ever-increasing frequency and magnitude of crises around the world. Large volumes of peer-reviewed literature are not only a testament to the field’s value and evolution, but also present an unprecedented outpouring of seemingly unmanageable information across a wide array of crises and disciplines. Disaster medicine, health and humanitarian assistance, global health, and public health disaster literature all lie within the disaster and global health in crisis literature spectrum and are increasingly accepted as multidisciplinary and transdisciplinary disciplines. Researchers, policy makers, and practitioners now face a new challenge: that of accessing this expansive literature for decision-making and exploring new areas of research. Individuals are also reaching beyond the peer-reviewed environment to grey literature, using search engines like Google Scholar to access policy documents, consensus reports, and conference proceedings. What is needed is a method and mechanism with which to search and retrieve relevant articles from this expansive body of literature. This manuscript presents both a framework and a workable process for a diverse group of users to navigate the growing peer-reviewed and grey disaster and global health in crisis literature. Methods: Disaster terms from textbooks, peer-reviewed, and grey literature were used to design a framework of thematic clusters and subject matter ‘nodes’. A set of 84 terms, selected from 143 curated terms, was organized within each node, reflecting topics within the disaster and global health in crisis literature. Terms were crossed with one another and with the term ‘disaster’. The results were formatted into tables and matrices. This process created a roadmap of search terms that could be applied to the PubMed database. Each search in the matrix or table results in a listed number of articles.
This process was applied to literature from PubMed from 2005-2011. A complementary process was also applied to Google Scholar using the same framework of clusters, nodes, and terms, expanding the search process to include the broader grey literature assets. Results: A framework of four thematic clusters and twelve subject matter nodes was designed to capture diverse disaster and global health in crisis-related content. From 2005-2011 there were 18,660 articles referring to the term [disaster]. Restricting the search to human research, MeSH, and the English language left 7,736 identified articles, still an unmanageable number to adequately process for research, policy, or best practices. However, the crossed search and matrix process revealed further examples of robust realms of research in disasters, emergency medicine, EMS, public health, and global health. Examples of potential gaps in the current peer-reviewed disaster and global health in crisis literature were identified in mental health, elderly care, and alternate sites of care. The same framework and process was then applied to Google Scholar, specifically for topics that resulted in few PubMed search returns. Applying the same framework and process to Google Scholar, the example searches retrieved unique peer-reviewed articles not identified in PubMed, as well as documents including books, governmental documents, and consensus papers. Conclusions: The proposed framework, methodology, and process, using four clusters, twelve nodes, and a matrix and table process applied to PubMed and Google Scholar, unlock otherwise inaccessible opportunities to better navigate the massively growing body of peer-reviewed disaster and global health in crisis literature.
This approach will assist researchers, policy makers, and practitioners to generate future research questions, report on the overall evolution of the disaster and global health in crisis field, and further guide disaster planning, prevention, preparedness, mitigation, response, and recovery. PMID:23591457
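The crossed-term matrix described above can be sketched programmatically: each node term is paired with the anchor term 'disaster' and with every other term to yield a grid of boolean queries. The node terms and exact PubMed query syntax below are illustrative assumptions, not the authors' actual 84-term set.

```python
from itertools import product

def crossed_queries(terms, anchor="disaster"):
    """Build a matrix of PubMed-style AND queries: each term crossed
    with every other term, plus each term crossed with the anchor."""
    matrix = {}
    for t1, t2 in product(terms, repeat=2):
        if t1 != t2:
            matrix[(t1, t2)] = f'("{t1}") AND ("{t2}")'
    anchored = {t: f'("{t}") AND ("{anchor}")' for t in terms}
    return matrix, anchored

terms = ["mental health", "elderly care", "triage"]  # illustrative node terms
matrix, anchored = crossed_queries(terms)
print(anchored["triage"])  # ("triage") AND ("disaster")
print(len(matrix))         # 6 ordered cross pairs for 3 terms
```

Each cell of the resulting matrix is a ready-made query string; submitting it to PubMed and recording the hit count reproduces the "listed number of articles" per cell that the framework uses to expose dense and sparse areas of the literature.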
Subject Specific Databases: A Powerful Research Tool
ERIC Educational Resources Information Center
Young, Terrence E., Jr.
2004-01-01
Subject specific databases, or vortals (vertical portals), are databases that provide highly detailed research information on a particular topic. They are the smallest, most focused search tools on the Internet and, in recent years, they've been on the rise. Currently, more of the so-called "mainstream" search engines, subject directories, and…
Couvin, David; Zozio, Thierry; Rastogi, Nalin
2017-07-01
Spoligotyping is one of the most commonly used polymerase chain reaction (PCR)-based methods for identification and study of the genetic diversity of the Mycobacterium tuberculosis complex (MTBC). Despite its known limitations if used alone, the methodology is particularly useful when combined with other methods such as mycobacterial interspersed repetitive units - variable number of tandem DNA repeats (MIRU-VNTRs). At a worldwide scale, spoligotyping has allowed identification of information on 103,856 MTBC isolates (corresponding to 98,049 clustered strains plus 5,807 unique isolates from 169 countries of patient origin) contained within the SITVIT2 proprietary database of the Institut Pasteur de la Guadeloupe. The SpolSimilaritySearch web tool described herein (available at: http://www.pasteur-guadeloupe.fr:8081/SpolSimilaritySearch) incorporates a similarity search algorithm allowing users to get a complete overview of similar spoligotype patterns (with information on the presence or absence of 43 spacers) in the aforementioned worldwide database. This tool allows one to analyze the spread and evolutionary patterns of MTBC by comparing similar spoligotype patterns, to distinguish between widespread, specific, and/or confined patterns, and to pinpoint patterns with large deleted blocks, which play an intriguing role in the genetic epidemiology of M. tuberculosis. Finally, the SpolSimilaritySearch tool also provides the country distribution patterns for each queried spoligotype. Copyright © 2017 Elsevier Ltd. All rights reserved.
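Since a spoligotype is essentially a binary pattern over 43 spacers, a similarity search of the kind SpolSimilaritySearch performs can be sketched as a position-wise comparison. This is a simplified illustration; the tool's actual scoring is not specified here, and the example patterns are invented.

```python
def spacer_similarity(p1, p2):
    """Fraction of the 43 spacer positions that agree.
    Patterns are strings of '1' (spacer present) and '0' (absent)."""
    assert len(p1) == len(p2) == 43
    return sum(a == b for a, b in zip(p1, p2)) / 43

def most_similar(query, database):
    """Return database patterns ranked by similarity to the query."""
    return sorted(database, key=lambda p: spacer_similarity(query, p), reverse=True)

# Invented example patterns (43 spacers each)
query = "1" * 35 + "0" * 8        # a pattern with a deleted block at spacers 36-43
db = [
    "1" * 43,                     # all spacers present
    "1" * 35 + "0" * 8,           # identical to the query
    "0" * 20 + "1" * 23,          # a very different pattern
]
ranked = most_similar(query, db)
print(spacer_similarity(query, ranked[0]))  # 1.0 for the exact match
```

Ranking by shared spacer positions also makes patterns with large deleted blocks easy to spot: they sit close to each other and far from patterns retaining all 43 spacers.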
What Will We Actually Do On the Moon?
NASA Astrophysics Data System (ADS)
Sherwood, Brent
2007-01-01
Descriptions are provided for eleven specific, representative lunar activity scenarios selected from among hundreds that arose in 2006 from the NASA-sponsored development of a "global lunar strategy." The scenarios are: pave for dust control; establish a colony of continuously active robots; kitchen science; designer biology; tend the machinery; search for pieces of ancient Earth; build simple observatories that open new wavelength regimes; establish a virtual real-time network to enable public engagement; institute a public-private lunar development corporation; rehearse planetary protection protocols for Mars; and expand life and intelligence beyond Earth through settlement of the Moon. Evocative scenarios such as these are proposed as a communications tool to help win public understanding and support of the Vision for Space Exploration.
Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra
2017-07-01
This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. An all-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, much faster than its electronic counterparts such as the Basic Local Alignment Search Tool (BLAST).
Custom Search Engines: Tools & Tips
ERIC Educational Resources Information Center
Notess, Greg R.
2008-01-01
Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…
The Material-Independent Signatures of Life: Forensic Tools of Astrobiology
NASA Astrophysics Data System (ADS)
Radu, Popa
Biological life is intimately related to the geochemical conditions on Earth and is fit for this planet's energy flux. It has often been suggested that life was also built in accordance with the particular local conditions offered by the early Earth. Common sense dictates that the constructive details of life on another planet should also be a reflection of the particular local conditions. Moreover, the collective activity of all life forms on a planet should have some measurable consequences on the global geochemistry. Comparison with the Earth-bound type of life is certainly inspirational, but only up to a point. One central rule in astrobiology is: life can be made of many things and can have many forms. The search for extraterrestrial life cannot be limited to the search for Earth-like examples. Despite the common sense of this guideline, a manifest tendency exists today to judge the geochemical conditions on other planets through Earth-colored glasses. Much too often we hear expressions such as 'conditions too hostile to harbor life', 'the search for Earth-like planets as potential hosts of life', 'chemistry appropriate for life', 'water as the fluid of life', or 'terra-formation of another planet to make it appropriate for life'. Irrespective of how hostile another planet might appear to our Earth-based metabolism, we cannot state with certainty that life cannot be present before a comprehensive investigation is performed which includes the search for life's material-independent signatures.
Villafañe, Jorge Hugo; Cantero-Tellez, Raquel; Valdes, Kristin; Usuelli, Federico Giuseppe; Berjano, Pedro
2017-09-01
Conservative treatments are commonly performed therapeutic interventions for the management of carpometacarpal (CMC) joint osteoarthritis (OA). Physical and occupational therapies are starting to use video-based online content both as a patient teaching tool and as a source for treatment techniques. YouTube is a popular video-sharing website that can be accessed easily. The purpose of this study was to analyze the quality of content and potential sources of bias in videos available on YouTube pertaining to thumb exercises for CMC OA. The YouTube video database was systematically searched using the search term 'thumb osteoarthritis and exercises' from its inception to March 10, 2017. The authors independently selected videos, conducted quality assessment, and extracted results. A total of 832 videos were found using the keywords. Of these, 10 videos clearly demonstrated therapeutic exercise for the management of CMC OA. In addition, the top-ranked video found by performing a search by 'views' was a video with more than 121,863 views, uploaded in 2015, that lasted 12.33 minutes and scored only 2 points on the Global Score for Educational Value rating scale. Most of the videos viewed that described conservative interventions for CMC OA management have a low level of evidence to support their use. Although patients and novice hand therapists are using YouTube and other online resources, videos produced by expert hand therapists are scarce.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
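The abstract treats force-field parameterization as a mathematical minimization problem over simulated observables. A minimal sketch of one of the compared algorithms, differential evolution, is shown below; the loss function is a hypothetical stand-in (Himmelblau's function, chosen because, like the real objective, it has multiple local minima) for the costly molecular simulations the paper actually optimizes.

```python
import random

def loss(params):
    # Hypothetical stand-in for the force-field objective (squared deviation
    # of simulated observables from reference data). Himmelblau's function
    # is used here because it has multiple local minima.
    x, y = params
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal differential evolution: mutate with scaled difference
    vectors, cross over, and keep the trial only if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                      for d in range(dim)]
            trial = [mutant[d] if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi)
                     for v, (lo, hi) in zip(trial, bounds)]
            s = f(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

params, best_loss = differential_evolution(loss, [(-5, 5), (-5, 5)])
```

In the paper's setting, each call to the loss would be a full molecular simulation, which is why reducing the number of required evaluations (as CoSMoS does) matters so much.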
Biron, P; Metzger, M H; Pezet, C; Sebban, C; Barthuet, E; Durand, T
2014-01-01
A full-text search tool was introduced into the daily practice of the Léon Bérard Center (France), a health care facility devoted to the treatment of cancer. This tool was integrated into the hospital information system by the IT department, which had been granted full autonomy to improve the system. The objective was to describe the development and various uses of a tool for full-text search of computerized patient records. The technology is based on Solr, an open-source search engine. It is a web-based application that processes HTTP requests and returns HTTP responses. A data processing pipeline that retrieves data from different repositories, then normalizes, cleans, and publishes it to Solr, was integrated into the information system of the Léon Bérard Center. The IT department also developed user interfaces to allow users to access the search engine within the computerized medical record of the patient. From January to May 2013, 500 queries were launched per month by an average of 140 different users. Several usages of the tool were described, as follows: medical management of patients, medical research, and improving the traceability of medical care in medical records. The sensitivity of the tool for detecting the medical records of patients diagnosed with both breast cancer and diabetes was 83.0%, and its positive predictive value was 48.7% (gold standard: manual screening by a clinical research assistant). The project demonstrates that the introduction of full-text search tools allowed practitioners to use unstructured medical information for various purposes.
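The tool described is built on Solr's standard HTTP interface: a client sends a `/select` request and receives a JSON response. A minimal sketch of such a client is below; the endpoint, core name (`patient_records`), and field names are illustrative assumptions, not details from the article.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_solr_query(base_url, core, query, rows=10, fields=None):
    # Standard Solr /select request; the core name and field list used in
    # the example call below are illustrative, not the center's schema.
    params = {"q": query, "rows": rows, "wt": "json"}
    if fields:
        params["fl"] = ",".join(fields)
    return "{}/solr/{}/select?{}".format(base_url, core, urlencode(params))

def search(base_url, core, query, **kwargs):
    # Issue the HTTP request and return the matching documents.
    with urlopen(build_solr_query(base_url, core, query, **kwargs)) as resp:
        return json.load(resp)["response"]["docs"]

# Example: a cohort query like the one evaluated in the article (records
# mentioning both breast cancer and diabetes), against a hypothetical core.
url = build_solr_query("http://localhost:8983", "patient_records",
                       'diagnosis:"breast cancer" AND diagnosis:diabetes')
```

The reported sensitivity of 83.0% with a 48.7% positive predictive value is typical of such keyword queries over free text: recall is high, but a human still has to screen the hits.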
In Silico PCR Tools for a Fast Primer, Probe, and Advanced Searching.
Kalendar, Ruslan; Muterko, Alexandr; Shamekova, Malika; Zhambakin, Kabyl
2017-01-01
The polymerase chain reaction (PCR) is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. The principle of this technique has been further used and applied in plenty of other simple or complex nucleic acid amplification technologies (NAAT). In parallel to laboratory "wet bench" experiments for nucleic acid amplification technologies, in silico or virtual (bioinformatics) approaches have been developed, among which is in silico PCR analysis. In silico NAAT analysis is a useful and efficient complementary method to ensure the specificity of primers or probes for an extensive range of PCR applications, from homology gene discovery and molecular diagnosis to DNA fingerprinting and repeat searching. Predicting the sensitivity and specificity of primers and probes requires a search to determine whether they match a database with an optimal number of mismatches, similarity, and stability. In the development of in silico bioinformatics tools for nucleic acid amplification technologies, the prospects for the development of new NAAT or similar approaches should be taken into account, including forward-looking and comprehensive analysis that is not limited to only one PCR technique variant. The software FastPCR and the online Java web tool are integrated tools for in silico PCR of linear and circular DNA, multiple primer or probe searches in large or small databases, and advanced searches. These tools are suitable for processing batch files, which is essential for automation when working with large amounts of data. The FastPCR software is available for download at http://primerdigital.com/fastpcr.html and the online Java version at http://primerdigital.com/tools/pcr.html.
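The core matching step such a tool performs, locating every site where a primer anneals to a template within a mismatch budget, can be sketched in a few lines. This is a toy illustration, not FastPCR's actual algorithm, which also weighs 3'-end stability, melting temperature, and secondary structure.

```python
def find_primer_sites(template, primer, max_mismatches=2):
    """Return (position, mismatch_count) for every site where the primer
    matches the template with at most max_mismatches mismatches. A toy
    sketch of the matching step only; real in silico PCR tools also score
    3'-end stability, melting temperature, and secondary structure."""
    sites = []
    k = len(primer)
    for i in range(len(template) - k + 1):
        mm = sum(1 for a, b in zip(template[i:i + k], primer) if a != b)
        if mm <= max_mismatches:
            sites.append((i, mm))
    return sites

# Perfect sites at positions 4 and 11, plus one single-mismatch site at 0.
hits = find_primer_sites("ACGTACGTTTGACGTTCGAT", "ACGTT", max_mismatches=1)
```

For genome-scale databases this naive scan is replaced by indexed search, but the mismatch-counting criterion is the same.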
Expedite random structure searching using objects from Wyckoff positions
NASA Astrophysics Data System (ADS)
Wang, Shu-Wei; Hsing, Cheng-Rong; Wei, Ching-Ming
2018-02-01
Random structure searching has proven to be a powerful approach for finding the global minimum and the metastable structures. A true random sampling is in principle needed, yet it would be highly time-consuming and/or practically impossible to find the global minimum for complicated systems in their high-dimensional configuration space. Thus the implementation of reasonable constraints, such as adopting system symmetries to reduce the independent dimensions in structural space and/or imposing chemical information to reach and relax into low-energy regions, is the most essential issue in the approach. In this paper, we propose the concept of an "object", which is either an atom or a set of atoms (such as molecules or carbonates) carrying a symmetry defined by one of the Wyckoff positions of the space group; through this process, the search for the global minimum of a complicated system is confined to a greatly reduced structural space and becomes accessible in practice. We examined several representative materials, including the Cd3As2 crystal, solid methanol, high-pressure carbonates (FeCO3), and the Si(111)-7 × 7 reconstructed surface, to demonstrate the power and the advantages of using the "object" concept in random structure searching.
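The workflow the abstract describes, generate random candidates, relax each to its nearest local minimum, and keep the best, can be illustrated on a one-dimensional toy "energy surface". Here crude steepest descent stands in for a real structural relaxation (e.g. DFT), and the potential is invented purely for illustration.

```python
import math
import random

def energy(x):
    # Invented one-dimensional "energy surface" with several local minima;
    # its global minimum sits near x = -0.31.
    return x * x + 4 * math.sin(5 * x) + 4

def relax(x, step=2e-3, iters=2000):
    # Crude steepest descent, standing in for the structural relaxation
    # a real random-structure search performs on each candidate.
    for _ in range(iters):
        grad = 2 * x + 20 * math.cos(5 * x)
        x -= step * grad
    return x

def random_structure_search(trials=60, seed=0):
    # Generate random candidates, relax each to its local minimum,
    # and keep the lowest-energy "structure" found.
    rng = random.Random(seed)
    best_x, best_e = None, float("inf")
    for _ in range(trials):
        x = relax(rng.uniform(-4.0, 4.0))
        e = energy(x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

best_x, best_e = random_structure_search()
```

The paper's "object" constraint addresses exactly the weakness this toy exposes: as dimensionality grows, the number of trials needed for unconstrained sampling to land in the global basin explodes, so confining candidates to symmetry-allowed configurations is essential.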
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability in areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions than the other algorithms. Moreover, the authors have proposed two parameters for balancing local and global search, as well as between various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions, which were used in previous works, show that our framework can find better optimal or close-to-optimal solutions with faster convergence speed for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
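The abstract does not give the Mean-Search Operator's exact form. One plausible reading, a new candidate generated near the centroid of the current best solutions, can be sketched as follows; treat the update rule as an illustrative assumption, not the authors' MPC definition.

```python
import random

def mean_search_candidate(population, k=5, sigma=0.1, rng=random):
    # Hypothetical sketch: build a new candidate near the centroid of the
    # k best solutions, with a small Gaussian perturbation. The abstract
    # does not specify the operator's exact form, so this update rule is
    # an illustrative assumption, not the authors' MPC definition.
    best = sorted(population, key=lambda s: s["fitness"])[:k]
    dim = len(best[0]["x"])
    centroid = [sum(s["x"][d] for s in best) / len(best) for d in range(dim)]
    return [c + rng.gauss(0, sigma) for c in centroid]
```

In a full hybrid such as MPC, a candidate like this would be evaluated and inserted into the population alongside the usual PSO and CRO updates, steering the search toward regions between the current best solutions.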
Hartzell, S.; Liu, P.
1996-01-01
A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of its two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
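The hybrid scheme, stochastic annealing moves interleaved with deterministic local refinement, can be sketched on a small test problem. To keep the example short, a simple pattern search stands in for the downhill simplex step the authors use; the structure (global perturbations with Metropolis acceptance, plus periodic local polish) is the point.

```python
import math
import random

def pattern_search(f, x, fx, step):
    # Simple local polish: try +/- step along each axis and accept
    # improvements until a full sweep makes no progress. This stands in
    # for the downhill simplex step used by the authors.
    improved = True
    while improved:
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[d] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
    return x, fx

def hybrid_anneal(f, x0, T0=1.0, cooling=0.995, steps=2000, seed=0):
    # Simulated annealing (Gaussian perturbations, Metropolis acceptance)
    # interleaved with periodic local polishing; the best model ever
    # visited is tracked separately.
    rng = random.Random(seed)
    x, fx, T = list(x0), f(x0), T0
    best, fbest = list(x), fx
    for k in range(steps):
        cand = [xi + rng.gauss(0, T) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
        if k % 100 == 0:
            x, fx = pattern_search(f, x, fx, step=T)
        if fx < fbest:
            best, fbest = list(x), fx
        T *= cooling
    return best, fbest

# Toy misfit surface standing in for the waveform misfit the inversion
# actually minimizes over slip amplitudes and rupture times.
misfit = lambda m: (m[0] - 3.0) ** 2 + (m[1] + 2.0) ** 2
model, final_misfit = hybrid_anneal(misfit, [0.0, 0.0])
```

The annealing moves provide the ability to escape local minima that the abstract highlights, while the local step accelerates convergence once a promising basin is found.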
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
... Limited Shijiazhuang Global New Century Tools Co., Ltd. Sichuan Huili Tools Co. Task Tools & Abrasives... Global Logistics (Shanghai) Co., Ltd. APS Qingdao Cangshan Qingshui Vegetable Foods Co., Ltd. Chengwu...
Helping the Warfighter Become Green! (Briefing Charts)
2011-02-01
WARFIGHTER-FOCUSED, GLOBALLY RESPONSIVE, FISCALLY RESPONSIBLE SUPPLY CHAIN LEADERSHIP. DOD EMALL: DOD's Online Shopping Tool • Web self...
Alternative Fuels Data Center: Vehicle Search
Tools » Vehicle Search (Alternative Fuels Data Center)
Scripting for Collaborative Search Computer-Supported Classroom Activities
ERIC Educational Resources Information Center
Verdugo, Renato; Barros, Leonardo; Albornoz, Daniela; Nussbaum, Miguel; McFarlane, Angela
2014-01-01
Searching online is one of the most powerful resources today's students have for accessing information. Searching in groups is a daily practice across multiple contexts; however, the tools we use for searching online do not enable collaborative practices, and traditional search models assume a single user navigating online alone. This paper…
Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter
ERIC Educational Resources Information Center
Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory
2010-01-01
Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…
Research of converter transformer fault diagnosis based on improved PSO-BP algorithm
NASA Astrophysics Data System (ADS)
Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping
2017-09-01
To overcome the disadvantages that the BP (Back Propagation) neural network and conventional Particle Swarm Optimization (PSO) converge prematurely on the global best particle in the early stage, are easily trapped in local optima, and give low diagnosis accuracy when applied to converter transformer fault diagnosis, we propose an improved PSO-BP neural network to raise the accuracy rate. This algorithm improves the inertia weight equation by using an attenuation strategy based on a concave function to avoid premature convergence of the PSO algorithm, and a Time-Varying Acceleration Coefficient (TVAC) strategy is adopted to balance the local and global search abilities. Finally, the simulation results show that the proposed approach has a better ability to optimize the BP neural network in terms of network output error, global searching performance, and diagnosis accuracy.
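The two ingredients named, a concave attenuation of the inertia weight and time-varying acceleration coefficients, are schedules applied inside the standard PSO velocity update. The abstract does not give the exact concave function, so a quadratic decay is assumed below; the TVAC endpoints follow values commonly used in the literature.

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    # Concave attenuation: the weight stays high early (favouring global
    # exploration) and drops off quickly late (favouring local search).
    # The exact concave function is not given in the abstract; a quadratic
    # decay between the usual PSO bounds 0.9 and 0.4 is assumed here.
    frac = t / t_max
    return w_min + (w_max - w_min) * (1.0 - frac ** 2)

def tvac(t, t_max, c_init=2.5, c_final=0.5):
    # Time-Varying Acceleration Coefficients: the cognitive coefficient c1
    # decreases while the social coefficient c2 increases, shifting the
    # swarm from individual exploration to collective convergence.
    frac = t / t_max
    c1 = c_init + (c_final - c_init) * frac
    c2 = c_final + (c_init - c_final) * frac
    return c1, c2
```

In the velocity update v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), these schedules simply replace the constant w, c1, and c2 at each iteration t.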
Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.
ERIC Educational Resources Information Center
Carlson, Randal D.; Repman, Judi
2002-01-01
Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)
Considerations in the Choice of an Internet Search Tool.
ERIC Educational Resources Information Center
Vaughan, Jason
1999-01-01
Describes a survey conducted among library school graduate students and librarians at the University of North Carolina at Chapel Hill that investigated factors that play a role in information professionals' choice of Internet search tools. Utility functions and ease of use are discussed and the original online survey is appended. (Author/LRW)
Basic Reference Tools for Nursing Research. A Workbook with Explanations and Examples.
ERIC Educational Resources Information Center
Smalley, Topsy N.
This workbook is designed to introduce nursing students to basic concepts and skills needed for searching the literatures of medicine, nursing, and allied health areas for materials relevant to specific information needs. The workbook introduces the following research tools: (1) the National Library of Medicine's MEDLINE searches, including a…
Tools to Ease Your Internet Adventures: Part I.
ERIC Educational Resources Information Center
Descy, Don E.
1993-01-01
This first of a two-part series highlights three tools that improve accessibility to Internet resources: (1) Alex, a database that accesses files in FTP (file transfer protocol) sites; (2) Archie, software that searches for file names with a user's search term; and (3) Gopher, a menu-driven program to access Internet sites. (LRW)
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation, and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images, or videos to supplement research analysis. Curation in the tool is driven by an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.
Mechanisms of Age-Related Decline in Memory Search Across the Adult Life Span
Hills, Thomas T.; Mata, Rui; Wilke, Andreas; Samanez-Larkin, Gregory R.
2013-01-01
Three alternative mechanisms for age-related decline in memory search have been proposed, which result from either reduced processing speed (global slowing hypothesis), overpersistence on categories (cluster-switching hypothesis), or the inability to maintain focus on local cues related to a decline in working memory (cue-maintenance hypothesis). We investigated these 3 hypotheses by formally modeling the semantic recall patterns of 185 adults between 27 and 99 years of age in the animal fluency task (Thurstone, 1938). The results indicate that people switch between global frequency-based retrieval cues and local item-based retrieval cues to navigate their semantic memory. Contrary to the global slowing hypothesis, which predicts no qualitative differences in dynamic search processes, and the cluster-switching hypothesis, which predicts reduced switching between retrieval cues, the results indicate that as people age, they tend to switch more often between local and global cues per item recalled, supporting the cue-maintenance hypothesis. Additional support for the cue-maintenance hypothesis is provided by a negative correlation between switching and digit span scores and between switching and total items recalled, which suggests that cognitive control may be involved in cue maintenance and the effective search of memory. Overall, the results are consistent with age-related decline in memory search being a consequence of reduced cognitive control, consistent with models suggesting that working memory is related to goal perseveration and the ability to inhibit distracting information. PMID:23586941
NASA Astrophysics Data System (ADS)
Aungkulanon, P.; Luangpaiboon, P.
2010-10-01
Nowadays, engineering problem systems are large and complicated. An effective finite sequence of instructions for solving these problems can be categorised into optimisation and meta-heuristic algorithms. Although meta-heuristics cannot guarantee the best decision variable levels from the sets of available alternatives, they are experience-based techniques that rapidly help in problem solving, learning, and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox functions. It has been shown that the effectiveness of all meta-heuristics depends almost exclusively on these auxiliary functions. In fact, the auxiliary procedure from one can be implemented in other meta-heuristics. The well-known meta-heuristics of harmony search (HSA) and shuffled frog-leaping algorithms (SFLA) are compared with their hybridisations. HSA is used to produce a near-optimal solution, modelled on the perfect state of harmony reached in the improvisation process of musicians. The SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics. It includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with different natures of single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods while searching for a better solution. The hybridisations proceed by a descent method to a local minimum, then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM seems to be better in terms of the mean and variance of design points and yields.
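The basic harmony search procedure referenced here is simple to state: each new candidate ("harmony") is improvised dimension by dimension, either from the harmony memory (probability HMCR, optionally pitch-adjusted with probability PAR) or at random, and it replaces the worst memory member when it improves on it. A minimal sketch on a toy objective, without the VNSM/MSM hybridisation the study adds:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimal harmony search. hms: harmony memory size; hmcr: memory
    consideration rate; par: pitch adjustment rate; bw: bandwidth of a
    pitch adjustment. The toy objective below stands in for the response
    surfaces studied in the paper."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]          # recall from memory
                if rng.random() < par:
                    v += rng.uniform(-bw, bw)      # pitch adjustment
            else:
                v = rng.uniform(lo, hi)            # fresh random note
            new.append(min(max(v, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

best, score = harmony_search(lambda x: sum(v * v for v in x),
                             [(-5.0, 5.0), (-5.0, 5.0)])
```

The hybridisations in the study graft neighbourhood changes (VNSM) and simplex moves (MSM) onto exactly this kind of improvisation loop.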
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
PubMed and beyond: a survey of web tools for searching biomedical literature
Lu, Zhiyong
2011-01-01
The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076
Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-08-27
The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
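The readability metrics used alongside the WQA tool follow published formulas; Flesch Reading Ease, for example, is 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words), with higher scores indicating easier text. A sketch is below; the syllable counter is a crude heuristic, whereas production readability tools use dictionaries or better approximations.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels, discounting a trailing
    # silent 'e'. Real readability tools use dictionaries or better
    # approximations; this undercounts many medical terms.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # Flesch Reading Ease:
    #   206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # Scores of 60-70 are conventionally described as "plain English".
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835 - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

The companion Flesch-Kincaid Grade Level uses the same two ratios with different coefficients (0.39 and 11.8, minus 15.59), which is why the study found analogous correlations for both scores.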
A review on the prevalence and measurement of elder abuse in the community.
Sooryanarayana, Rajini; Choo, Wan-Yuen; Hairi, Noran N
2013-10-01
Aging is a rising phenomenon globally and elder abuse is becoming increasingly recognized as a health and social problem. This review aimed to identify the prevalence of elder abuse in community settings, and discuss issues regarding measurement tools and strategies to measure elderly abuse by systematically reviewing all community-based studies conducted worldwide. Articles on elder abuse from 1990 to 2011 were reviewed. A total of 1,832 articles referring to elders residing at home either in their own or at relatives' houses were searched via CINAHL and MEDLINE electronic databases, in addition to a hand search of the latest articles in geriatric textbooks and screening references, choosing a total of 26 articles for review. Highest prevalence was reported in developed countries, with Spain having 44.6% overall prevalence of suspicion of abuse and developing countries exhibiting lower estimates, from 13.5% to 28.8%. Physical abuse was among the least encountered, with psychological abuse and financial exploitation being the most common types of maltreatment reported. To date, there is no single gold standard test to ascertain abuse, with numerous tools and different methods employed in various studies, coupled with varying definitions of thresholds for age. Current evidences show that elder abuse is a common problem in both developed and developing countries. It is important that social, health care, and legal systems take these findings into consideration in screening for abuse or reforming existing services to protect the health and welfare of the elderly.
NASA Astrophysics Data System (ADS)
Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.
2011-06-01
Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.
Short-term Internet search using makes people rely on search engines when facing unknown issues.
Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen; Dong, Guangheng
2017-01-01
Internet search engines, which have powerful search/sort functions and ease-of-use features, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on it. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.
Desai, Sunita; Hatfield, Laura A; Hicks, Andrew L; Sinaiko, Anna D; Chernew, Michael E; Cowling, David; Gautam, Santosh; Wu, Sze-Jung; Mehrotra, Ateev
2017-08-01
Insurers, employers, and states increasingly encourage price transparency so that patients can compare health care prices across providers. However, the evidence on whether price transparency tools encourage patients to receive lower-cost care and reduce overall spending remains limited and mixed. We examined the experience of a large insured population that was offered a price transparency tool, focusing on a set of "shoppable" services (lab tests, office visits, and advanced imaging services). Overall, offering the tool was not associated with lower shoppable services spending. Only 12 percent of employees who were offered the tool used it in the first fifteen months after it was introduced, and use of the tool was not associated with lower prices for lab tests or office visits. The average price paid for imaging services preceded by a price search was 14 percent lower than that paid for imaging services not preceded by a price search. However, only 1 percent of those who received advanced imaging conducted a price search. Simply offering a price transparency tool is not sufficient to meaningfully decrease health care prices or spending. Project HOPE—The People-to-People Health Foundation, Inc.
Global Emergency Medicine: A review of the literature from 2017.
Becker, Torben K; Trehan, Indi; Hayward, Alison Schroth; Hexom, Braden J; Kivlehan, Sean M; Lunney, Kevin M; Modi, Payal; Osei-Ampofo, Maxwell; Pousson, Amelia; Cho, Daniel K; Levine, Adam C
2018-05-23
The Global Emergency Medicine Literature Review (GEMLR) conducts an annual search of peer-reviewed and gray literature relevant to global emergency medicine (EM) to identify, review, and disseminate the most important new research in this field to a global audience of academics and clinical practitioners. This year, 17,722 articles written in three languages were identified by our electronic search. These articles were distributed among 20 reviewers for initial screening based on their relevance to the field of global EM. Another two reviewers searched the gray literature, yielding an additional 11 articles. All articles that were deemed appropriate by at least one reviewer and approved by their editor underwent formal scoring of overall quality and importance. Two independent reviewers scored all articles. A total of 848 articles met our inclusion criteria and underwent full review: 63% were categorized as emergency care in resource-limited settings, 23% as disaster and humanitarian response, and 14% as emergency medicine development. Twenty-one articles received scores of 18.5 or higher out of a maximum score of 20 and were selected for formal summary and critique. Inter-rater reliability testing between reviewers revealed a Cohen's kappa of 0.344. In 2017, the total number of articles identified by our search continued to increase. Studies and reviews with a focus on infectious diseases, pediatrics, and trauma represented the majority of top-scoring articles. This article is protected by copyright. All rights reserved.
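The inter-rater reliability figure reported above (a Cohen's kappa of 0.344) can be computed directly from two reviewers' labels on the same set of articles. A minimal sketch follows; the function name and data layout are illustrative, not taken from the GEMLR methodology:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[k] / n) * (freq_b[k] / n) for k in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa near 0.344, as reported, is conventionally read as "fair" agreement: well above chance but far from consistent.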
MyGeoHub: A Collaborative Geospatial Research and Education Platform
NASA Astrophysics Data System (ADS)
Kalyanam, R.; Zhao, L.; Biehl, L. L.; Song, C. X.; Merwade, V.; Villoria, N.
2017-12-01
Scientific research is increasingly collaborative and globally distributed; research groups now rely on web-based scientific tools and data management systems to simplify their day-to-day collaborative workflows. However, such tools often lack seamless interfaces, requiring researchers to contend with manual data transfers, annotation, and sharing. MyGeoHub is a web platform that supports out-of-the-box, seamless workflows involving data ingestion, metadata extraction, analysis, sharing, and publication. MyGeoHub is built on the HUBzero cyberinfrastructure platform and adds general-purpose software building blocks (GABBs) for geospatial data management, visualization, and analysis. A data management building block, iData, processes geospatial files, extracting metadata for keyword- and map-based search while enabling quick previews. iData is pervasive, allowing access through a web interface, through scientific tools on MyGeoHub, or even from mobile field devices via a data service API. GABBs includes a Python map library as well as map widgets that, in a few lines of code, generate complete geospatial visualization web interfaces for scientific tools. GABBs also includes powerful tools that can be used with no programming effort. The GeoBuilder tool provides an intuitive wizard for importing multi-variable, geo-located time series data (typical of sensor readings and GPS trackers) to build visualizations supporting data filtering and plotting. MyGeoHub has been used in tutorials at scientific conferences and in educational activities for K-12 students. MyGeoHub is also constantly evolving; the recent addition of Jupyter and R Shiny notebook environments enables reproducible, richly interactive geospatial analyses and applications ranging from simple pre-processing to published tools. MyGeoHub is not a monolithic geospatial science gateway; instead, it supports diverse needs ranging from a feature-rich data management system to complex scientific tools and workflows.
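The iData functionality described above combines keyword search with map-based (bounding-box) search over file metadata. The idea can be sketched as a tiny in-memory index; the class and method names here are hypothetical illustrations, not the GABBs API:

```python
class GeoIndex:
    """Toy metadata index supporting keyword and bounding-box queries."""

    def __init__(self):
        self.records = []  # each record: {"name", "keywords", "bbox"}

    def add(self, name, keywords, bbox):
        """bbox = (min_lon, min_lat, max_lon, max_lat)."""
        self.records.append({"name": name, "keywords": set(keywords), "bbox": bbox})

    def by_keyword(self, word):
        """Names of all files tagged with the given keyword."""
        return [r["name"] for r in self.records if word in r["keywords"]]

    def by_region(self, query_bbox):
        """Names of all files whose bounding box intersects the query region."""
        qx0, qy0, qx1, qy1 = query_bbox
        hits = []
        for r in self.records:
            x0, y0, x1, y1 = r["bbox"]
            # Two axis-aligned boxes intersect iff they overlap on both axes.
            if x0 <= qx1 and qx0 <= x1 and y0 <= qy1 and qy0 <= y1:
                hits.append(r["name"])
        return hits
```

A real implementation would extract the keywords and bounding box automatically from the geospatial file's metadata and use a spatial index rather than a linear scan, but the query model is the same.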
The role of indigenous health workers in promoting oral health during pregnancy: a scoping review.
Villarosa, Ariana C; Villarosa, Amy R; Salamonson, Yenna; Ramjan, Lucie M; Sousa, Mariana S; Srinivas, Ravi; Jones, Nathan; George, Ajesh
2018-03-20
Early childhood caries is the most common chronic childhood disease worldwide. Australian Aboriginal and Torres Strait Islander children are twice as likely to develop dental decay, and contributing factors include poor maternal oral health and underutilisation of dental services. Globally, Indigenous health workers are in a unique position to deliver culturally competent oral healthcare because they have a contextual understanding of the needs of the community. This scoping review aimed to identify the role of Indigenous health workers in promoting maternal oral health globally. A systematic search of six electronic databases was undertaken for relevant published and grey literature, and was expanded to include non-dental health professionals and other Indigenous populations across the lifespan when limited studies were identified. Twenty-two papers met the inclusion criteria, focussing on the role of Indigenous health workers in maternal oral healthcare, types of oral health training programs, and screening tools to evaluate program effectiveness. There was a paucity of peer-reviewed evidence on the role of Indigenous health workers in promoting maternal oral health, with most studies focusing on other non-dental health professionals. Nevertheless, there were reports of Indigenous health workers supporting oral health in early childhood. Although some oral health screening tools and training programs were identified for non-dental health professionals during the antenatal period, no specific screening tool has been developed for use by Indigenous health workers. While the role of health workers from Indigenous communities in promoting maternal oral health is yet to be clearly defined, they have the potential to play a crucial role in 'driving' screening and education of maternal oral health, especially when there is adequate organisational support, warranting further research.
Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches.
Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole
2015-01-01
Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important, as shown by the long delays and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media, and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org, and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's Dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27%, and 59% and 32%, 18%, 34%, and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other three search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of Doctor's Dilemma questions. Advances in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools, and social media are some of the areas that hold promise.
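The recall@k metric used in this comparison is simple to compute from ranked result lists. A minimal sketch, with illustrative function and variable names:

```python
def recall_at_k(ranked_results, correct_diagnosis, k):
    """1 if the correct diagnosis appears among the top-k results, else 0."""
    return int(correct_diagnosis in ranked_results[:k])

def mean_recall_at_k(cases, k):
    """Fraction of cases whose correct diagnosis is ranked in the top k.

    cases: list of (ranked_results, correct_diagnosis) pairs,
    one per disease case queried against the search engine.
    """
    hits = sum(recall_at_k(results, diagnosis, k) for results, diagnosis in cases)
    return hits / len(cases)
```

Applied to the 56 cases above, this is exactly how a figure such as "recall@10 of 59%" arises: in 33 of 56 cases the correct diagnosis appeared somewhere in the top ten results.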
Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki
2017-12-01
Mass spectrometry (MS) is a widely used proteome analysis tool in biomedical science. In MS-based bottom-up proteomic approaches to protein identification, sequence database (DB) searching has been routinely used because of its simplicity and convenience. However, searching a sequence DB with multiple variable modification options can increase processing time and false-positive errors in large and complicated MS data sets. Spectral library searching is an alternative solution that avoids the limitations of sequence DB searching and allows the detection of more peptides with high sensitivity. Unfortunately, this technique has lower proteome coverage, limiting the detection of novel and complete peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually uses multiple reference and simulated spectral libraries to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery-rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins by supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec Search method, the resulting Epsilon-Q system shows a synergistic effect, outperforming other sequence DB search engines in identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.
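False-discovery-rate control in MS search pipelines is commonly implemented with target-decoy counting: matches against a deliberately shuffled "decoy" library estimate how many of the target matches above a score threshold are spurious. The sketch below illustrates that general idea only; it is not Epsilon-Q's actual algorithm, and the function name and data layout are assumptions:

```python
def fdr_threshold(psms, max_fdr=0.01):
    """Lowest score cutoff keeping the estimated FDR within max_fdr.

    psms: list of (score, is_decoy) pairs, one per peptide-spectrum match.
    The FDR at a cutoff is estimated as (#decoys / #targets) at or above it.
    Returns None if no cutoff satisfies max_fdr.
    """
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    best = None
    decoys = targets = 0
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if decoys / max(targets, 1) <= max_fdr:
            best = score  # cutoff can be relaxed down to this score
    return best
```

"Class-specific" FDR control, as in Epsilon-Q, applies this kind of estimate separately per match class rather than pooling all matches into one ranked list.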
Pain: A content review of undergraduate pre-registration nurse education in the United Kingdom.
Mackintosh-Franklin, Carolyn
2017-01-01
Pain is a global health issue, with poor assessment and management of pain associated with serious disability and detrimental socio-economic consequences. Pain is also a closely associated symptom of the three major causes of death in the developed world: coronary heart disease, stroke, and cancer. There is a significant body of work indicating that current nursing practice has failed to address pain as a priority, resulting in poor practice and unnecessary patient suffering. Additionally, nurse education appears to lack focus or emphasis on the importance of pain assessment and its management. A three-step online search process was carried out across 71 Higher Education Institutes (HEIs) in the United Kingdom (UK) that deliver approved undergraduate nurse education programmes: step 1, to find detailed programme documentation; step 2, to find references to pain in the detailed documents; and step 3, to find references to pain in nursing curricula across all UK HEI websites, using Google and each HEI's site-specific search tool. The word 'pain' featured minimally in programme documents, with 9 (13%) documents making reference to it, including 3 occurrences that were not relevant to the programme content. The word 'pain' also featured minimally in the content of programmes/modules in the website search, with no references at all to pain in undergraduate pre-registration nursing programmes. The references found during the website search were for continuing professional development (CPD) or Masters-level programmes. In spite of the global importance of pain as a major health issue, both in its own right and as a significant symptom of leading causes of death and illness, pain appears to be a neglected area within the undergraduate nursing curriculum.
Evidence suggests that improving nurse education in this area can have a positive impact on clinical practice; however, without educational input, the current levels of poor practice are unlikely to improve and unnecessary patient suffering will continue. Undergraduate nurse education in the UK needs to review its current approach to content and ensure that pain is appropriately and prominently featured within pre-registration nurse education. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automated Patent Searching in the EPO: From Online Searching to Document Delivery.
ERIC Educational Resources Information Center
Nuyts, Annemie; Jonckheere, Charles
The European Patent Office (EPO) has recently implemented the last part of its ambitious automation project aimed at creating an automated search environment for approximately 1200 EPO patent search examiners. The examiners now have at their disposal an integrated set of tools offering a full range of functionalities from online searching, via…
Aggregation Tool to Create Curated Data albums to Support Disaster Recovery and Response
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Despite advances in the science and technology of natural hazard prediction and simulation, losses incurred due to natural disasters keep growing every year. Natural disasters cause more economic losses than anthropogenic disasters. Economic losses due to natural hazards are estimated at around $6-$10 billion annually for the U.S., and this number keeps increasing every year. This increase has been attributed to population growth and migration to more hazard-prone locations such as coasts. As this trend continues, in concert with shifts in weather patterns caused by climate change, it is anticipated that losses associated with natural disasters will keep growing substantially. One of the challenges disaster response and recovery analysts face is to quickly find, access, and utilize a vast variety of relevant geospatial data collected by different federal agencies such as the DoD, NASA, NOAA, EPA, and USGS. Examples of these data sets include high spatio-temporal resolution multi/hyperspectral satellite imagery, model prediction outputs from weather models, the latest radar scans, and measurements from sensor networks such as the Integrated Ocean Observing System. Often, analysts are familiar with limited but specific datasets and are unaware of, or unfamiliar with, a large quantity of other useful resources. Finding airborne or satellite data useful for a natural disaster event often requires a time-consuming search through web pages and data archives. Additional information related to damages, deaths, and injuries requires extensive online searches for news reports and official report summaries. An analyst must also sift through vast amounts of potentially useful digital information captured by the general public, such as geo-tagged photos, videos, and real-time damage updates within Twitter feeds.
Collecting and aggregating these information fragments can provide useful information for assessing damage in real time and help direct recovery efforts. The search process for the analyst could be made much more efficient and productive if a tool could go beyond a typical search engine and provide not just links to web sites but actual links to specific data relevant to the natural disaster, parse unstructured reports for useful information nuggets, and gather other related reports, summaries, news stories, and images. This presentation describes a semantic aggregation tool developed to address a similar problem for Earth science researchers. This tool provides automated curation and creates "Data Albums" to support case studies. The generated "Data Albums" are compiled collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; information about the event contained in news reports; and images or videos to supplement research analysis. An ontology-based relevancy-ranking algorithm drives the curation of relevant data sets for a given event. This tool is now being used to generate a catalog of hurricane case studies at the Global Hydrology Resource Center (GHRC), one of NASA's Distributed Active Archive Centers. Another instance of the Data Albums tool is currently being created in collaboration with NASA/MSFC's SPoRT Center, which conducts research on unique NASA products and capabilities that can be transitioned to the operational community to solve forecast problems. This new instance focuses on severe weather to support SPoRT researchers in their model evaluation studies.
Improve homology search sensitivity of PacBio data by correcting frameshifts.
Du, Nan; Sun, Yanni
2016-09-01
Single-molecule, real-time (SMRT) sequencing, developed by Pacific Biosciences, produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertions or deletions. During alignment-based homology search, insertion or deletion errors in genes cause frameshifts and may lead to only marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity incurs errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups adopt SMRT sequencing, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors compared with a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
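The frameshift problem described above is easy to see in a toy example: a single deleted base shifts every downstream codon, so the translated protein diverges after the error and alignment against the true protein collapses. The sketch below uses a deliberately partial codon table, sufficient only for this example:

```python
# Partial codon table covering just the codons used below (illustrative only).
CODONS = {"ATG": "M", "GCT": "A", "GAA": "E", "TGG": "W", "CTG": "L",
          "GAT": "D", "GGC": "G"}

def translate(dna):
    """Translate DNA codon by codon; '?' marks codons missing from the table."""
    return "".join(CODONS.get(dna[i:i + 3], "?")
                   for i in range(0, len(dna) - 2, 3))

read = "ATGGCTGAATGGCTG"       # error-free read: ATG GCT GAA TGG CTG
with_deletion = read[:7] + read[8:]  # one base deleted in the third codon
```

Here `translate(read)` yields "MAEWL", while the single-deletion read translates to "MADG": every residue after the error is wrong, which is exactly why a frameshift-aware tool must correct indels before (or during) protein-level homology search.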
Petersen, Jakob; Simons, Hilary; Patel, Dipti; Freedman, Joanne
2017-01-01
Objectives The Zika virus (ZIKV) outbreak in the Americas in 2015–2016 posed a novel global threat due to the association with congenital malformations and its rapid spread. Timely information about the spread of the disease was paramount to public health bodies issuing travel advisories. This paper looks at the online interaction with a national travel health website during the outbreak and compares this to trends in internet searches and news media output. Methods Time trends were created for weekly views of ZIKV-related pages on a UK travel health website, relative search volumes for ‘Zika’ on Google UK, ZIKV-related items aggregated by Google UK News and rank of ZIKV travel advisories among all other pages between 15 November 2015 and 20 August 2016. Results Time trends in traffic to the travel health website corresponded with Google searches, but less so with media items due to intense coverage of the Rio Olympics. Travel advisories for pregnant women were issued from 7 December 2015 and began to increase in popularity (rank) from early January 2016, weeks before a surge in interest as measured by Google searches/news items at the end of January 2016. Conclusions The study showed an amplification of perceived risk among users of a national travel health website weeks before the initial surge in public interest. This suggests a potential value for tools to detect changes in online information seeking behaviours for predicting periods of high demand where the routine capability of travel health services could be exceeded. PMID:28860226
Petersen, Jakob; Simons, Hilary; Patel, Dipti; Freedman, Joanne
2017-08-31
The Zika virus (ZIKV) outbreak in the Americas in 2015-2016 posed a novel global threat due to the association with congenital malformations and its rapid spread. Timely information about the spread of the disease was paramount to public health bodies issuing travel advisories. This paper looks at the online interaction with a national travel health website during the outbreak and compares this to trends in internet searches and news media output. Time trends were created for weekly views of ZIKV-related pages on a UK travel health website, relative search volumes for 'Zika' on Google UK, ZIKV-related items aggregated by Google UK News and rank of ZIKV travel advisories among all other pages between 15 November 2015 and 20 August 2016. Time trends in traffic to the travel health website corresponded with Google searches, but less so with media items due to intense coverage of the Rio Olympics. Travel advisories for pregnant women were issued from 7 December 2015 and began to increase in popularity (rank) from early January 2016, weeks before a surge in interest as measured by Google searches/news items at the end of January 2016. The study showed an amplification of perceived risk among users of a national travel health website weeks before the initial surge in public interest. This suggests a potential value for tools to detect changes in online information seeking behaviours for predicting periods of high demand where the routine capability of travel health services could be exceeded. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Aslakson, Rebecca A; Dy, Sydney M; Wilson, Renee F; Waldfogel, Julie; Zhang, Allen; Isenberg, Sarina R; Blair, Alex; Sixon, Joshua; Lorenz, Karl A; Robinson, Karen A
2017-12-01
Assessment tools are data collection instruments that are completed by or with patients or caregivers and which collect data at the individual patient or caregiver level. The objectives of this study are to 1) summarize palliative care assessment tools completed by or with patients or caregivers and 2) identify needs for future tool development and evaluation. We completed 1) a systematic review of systematic reviews; 2) a supplemental search of previous reviews and Web sites, and/or 3) a targeted search for primary articles when no tools existed in a domain. Paired investigators screened search results, assessed risk of bias, and abstracted data. We organized tools by domains from the National Consensus Project Clinical Practice Guidelines for Palliative Care and selected the most relevant, recent, and highest quality systematic review for each domain. We included 10 systematic reviews and identified 152 tools (97 from systematic reviews and 55 from supplemental sources). Key gaps included no systematic review for pain and few tools assessing structural, cultural, spiritual, or ethical/legal domains, or patient-reported experience with end-of-life care. Psychometric information was available for many tools, but few studies evaluated responsiveness (sensitivity to change) and no studies compared tools. Few to no tools address the spiritual, ethical, or cultural domains or patient-reported experience with end-of-life care. While some data exist on psychometric properties of tools, the responsiveness of different tools to change and/or comparisons between tools have not been evaluated. Future research should focus on developing or testing tools that address domains for which few tools exist, evaluating responsiveness, and comparing tools. Copyright © 2017 American Academy of Hospice and Palliative Medicine. All rights reserved.
Large-scale feature searches of collections of medical imagery
NASA Astrophysics Data System (ADS)
Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.
1993-09-01
Large-scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of a large collection of medical imagery, one can search either text descriptors of the imagery in the collection (usually the interpretation) or, if the imagery is in digital format, the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user-friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are building, from our accumulated interpretation data, a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer-assisted imagery searches of a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
NASA Astrophysics Data System (ADS)
Gordon, M. K.; Showalter, M. R.; Ballard, L.; Tiscareno, M.; French, R. S.; Olson, D.
2017-06-01
The PDS RMS Node hosts OPUS, an accurate, comprehensive search tool for spacecraft remote sensing observations. OPUS supports Cassini (CIRS, ISS, UVIS, VIMS); New Horizons (LORRI, MVIC); Galileo SSI; Voyager ISS; and Hubble (ACS, STIS, WFC3, WFPC2).
Liverpool's Discovery: A University Library Applies a New Search Tool to Improve the User Experience
ERIC Educational Resources Information Center
Kenney, Brian
2011-01-01
This article features the University of Liverpool's arts and humanities library, which applies a new search tool to improve the user experience. In nearly every way imaginable, the Sydney Jones Library and the Harold Cohen Library--the university's two libraries that serve science, engineering, and medical students--support the lives of their…
E-Portfolio, a Valuable Job Search Tool for College Students
ERIC Educational Resources Information Center
Yu, Ti
2012-01-01
Purpose: The purpose of this paper is to find answers to the following questions: How do employers think about e-portfolios? Do employers really see e-portfolios as a suitable hiring tool? Which factors in students' e-portfolios attract potential employers? Can e-portfolios be successfully used by students in their search for a job?…
A knowledge based search tool for performance measures in health care systems.
Beyan, Oya D; Baykal, Nazife
2012-02-01
Performance measurement is vital for improving health care systems. However, we are still far from having widely accepted performance measurement models, and researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed to represent domain knowledge and support reasoning, and we applied knowledge-based decision support techniques to cope with uncertainty. As a result, we designed a tool that simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.
VizieR Online Data Catalog: James Clerk Maxwell Telescope Science Archive (CADC, 2003)
NASA Astrophysics Data System (ADS)
Canadian Astronomy Data Centre
2018-01-01
The JCMT Science Archive (JSA), a collaboration between the CADC and the EAO, is the official distribution site for observational data obtained with the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii. The JSA search interface is provided by the CADC Search tool, which provides generic access to the complete set of telescopic data archived at the CADC. Help on the use of this tool is provided via tooltips. For additional information on instrument capabilities and data reduction, please consult the SCUBA-2 and ACSIS instrument pages provided on the JAC-maintained JCMT pages. JCMT-specific help related to the use of the CADC AdvancedSearch tool is available from the JAC. (1 data file).
World Wide Web Search Engines: AltaVista and Yahoo.
ERIC Educational Resources Information Center
Machovec, George S., Ed.
1996-01-01
Examines the history, structure, and search capabilities of Internet search tools AltaVista and Yahoo. AltaVista provides relevance-ranked feedback on full-text searches. Yahoo indexes Web "citations" only but does organize information hierarchically into predefined categories. Yahoo has recently become a publicly held company and…
Utilization of a radiology-centric search engine.
Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan
2010-04-01
Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imagings were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.
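The sampling scheme described above (categorizing every 25th query) is a plain systematic sample. A minimal sketch of that idea, with a hypothetical classifier and made-up query strings (the paper's actual categories and data are not reproduced here):

```python
from collections import Counter

def sample_queries(queries, step=25):
    """Take every `step`-th query, mirroring a systematic sampling scheme."""
    return queries[::step]

def tally_by_category(sampled, categorize):
    """Count sampled queries per category using a caller-supplied classifier."""
    return Counter(categorize(q) for q in sampled)

# Toy illustration: 75 hypothetical queries, a trivial keyword classifier.
queries = ["knee mri"] * 50 + ["ct chest"] * 25
sampled = sample_queries(queries, step=25)          # indices 0, 25, 50
counts = tally_by_category(
    sampled, lambda q: "MSK" if "knee" in q else "chest"
)
```

Systematic sampling keeps the workload of manual categorization proportional to 1/step while preserving the temporal spread of the query log.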
Global and local "teachable moments": The role of Nobel Prize and national pride.
Baram-Tsabari, Ayelet; Segev, Elad
2018-05-01
This study examined to what extent Nobel Prize announcements and awards trigger global and local searches or "teachable moments" related to the laureates and their discoveries. We examined the longitudinal trends in Google searches for the names and discoveries of Nobel laureates from 2012 to 2017. The findings show that Nobel Prize events clearly trigger more searches for laureates, but also for their respective discoveries. We suggest that fascination with the Nobel prize creates a teachable moment not only for the underlying science, but also about the nature of science. Locality also emerged as playing a significant role in intensifying interest.
GWFASTA: server for FASTA search in eukaryotic and microbial genomes.
Issac, Biju; Raghava, G P S
2002-09-01
Similarity searches are a powerful method for solving important biological problems such as database scanning, evolutionary studies, gene prediction, and protein structure prediction. FASTA is a widely used sequence comparison tool for rapid database scanning. Here we describe the GWFASTA server, which was developed to assist the FASTA user in similarity searches against partially and/or completely sequenced genomes. GWFASTA consists of more than 60 microbial genomes, eight eukaryote genomes, and the proteomes of annotated genomes. In fact, it provides the maximum number of databases for similarity searching from a single platform. GWFASTA allows the submission of more than one sequence as a single query for a FASTA search. It also provides integrated post-processing of FASTA output, including compositional analysis of proteins, multiple sequence alignment, and phylogenetic analysis. Furthermore, it summarizes the search results organism-wise for prokaryotes and chromosome-wise for eukaryotes. Thus, the integration of different tools for sequence analyses makes GWFASTA a powerful tool for biologists.
Karthikeyan, Muthukumarasamy; Pandit, Yogesh; Pandit, Deepak; Vyas, Renu
2015-01-01
Virtual screening is an indispensable tool for coping with the massive amounts of data generated by high-throughput omics technologies. With the objective of enhancing the automation capability of the virtual screening process, a robust portal termed MegaMiner has been built on a cloud computing platform, wherein the user submits a text query and directly accesses the proposed lead molecules along with their drug-like, lead-like, and docking scores. Textual representation of chemical structural data is fraught with ambiguity in the absence of a global identifier. We have used a combination of statistical models, a chemical dictionary, and regular expressions to build a disease-specific dictionary. To demonstrate the effectiveness of this approach, a case study on malaria has been carried out in the present work. MegaMiner offered superior results compared to other text mining search engines, as established by F score analysis. A single query term 'malaria' in the portlet led to the retrieval of related PubMed records, protein classes, drug classes, and 8000 scaffolds, which were internally processed and filtered to suggest new molecules as potential anti-malarials. The results obtained were validated by docking the virtual molecules into relevant protein targets. It is hoped that MegaMiner will serve as an indispensable tool not only for identifying hidden relationships between various biological and chemical entities but also for building better corpora and ontologies.
Optimization of Microelectronic Devices for Sensor Applications
NASA Technical Reports Server (NTRS)
Cwik, Tom; Klimeck, Gerhard
2000-01-01
The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments and communication systems among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space-searching for optimized performance by repeated fabrication efforts-is unfeasible, it does motivate the development of reliable software design tools. These tools necessitate models based on fundamental physics and mathematics of the device to accurately model effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly developed data structures. These tools can then be integrated into an optimization environment, and with the available memory capacity and computational speed of high performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications of the electromagnetic modeling of infrared filtering, as well as heterostructure device design will be presented using genetic algorithm global optimization methods.
Optimal Foraging in Semantic Memory
ERIC Educational Resources Information Center
Hills, Thomas T.; Jones, Michael N.; Todd, Peter M.
2012-01-01
Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared…
Assessment Tools for Evaluation of Oral Feeding in Infants Less than Six Months Old
Pados, Britt F.; Park, Jinhee; Estrem, Hayley; Awotwi, Araba
2015-01-01
Background Feeding difficulty is common in infants less than six months old. Identification of infants in need of specialized treatment is critical to ensure appropriate nutrition and feeding skill development. Valid and reliable assessment tools help clinicians objectively evaluate feeding. Purpose To identify and evaluate assessment tools available for clinical assessment of bottle- and breast-feeding in infants less than six months old. Methods/Search Strategy CINAHL, HaPI, PubMed, and Web of Science were searched for “infant feeding” and “assessment tool.” The literature (n=237) was reviewed for relevant assessment tools. A secondary search was conducted in CINAHL and PubMed for additional literature on identified tools. Findings/Results Eighteen assessment tools met inclusion criteria. Of these, seven were excluded because of limited available literature or because they were intended for use with a specific diagnosis or in research only. There are 11 assessment tools available for clinical practice. Only two of these were intended for bottle-feeding. All 11 indicated they were appropriate for use with breast-feeding. None of the available tools have adequate psychometric development and testing. Implications for Practice All of the tools should be used with caution. The Early Feeding Skills Assessment and Bristol Breastfeeding Assessment Tool had the most supportive psychometric development and testing. Implications for Research Feeding assessment tools need to be developed and tested to guide optimal clinical care of infants from birth through six months. A tool that assesses both bottle- and breast-feeding would allow for consistent assessment across feeding methods. PMID:26945280
Ruan, Jujun; Zhang, Chao; Li, Ya; Li, Peiyi; Yang, Zaizhi; Chen, Xiaohong; Huang, Mingzhi; Zhang, Tao
2017-02-01
This work proposes an on-line hybrid intelligent control system based on a genetic algorithm (GA) evolving fuzzy wavelet neural network software sensor to control dissolved oxygen (DO) in an anaerobic/anoxic/oxic process for treating papermaking wastewater. With the self-learning and memory abilities of neural network, handling the uncertainty capacity of fuzzy logic, analyzing local detail superiority of wavelet transform and global search of GA, this proposed control system can extract the dynamic behavior and complex interrelationships between various operation variables. The results indicate that the reasonable forecasting and control performances were achieved with optimal DO, and the effluent quality was stable at and below the desired values in real time. Our proposed hybrid approach proved to be a robust and effective DO control tool, attaining not only adequate effluent quality but also minimizing the demand for energy, and is easily integrated into a global monitoring system for purposes of cost management.
Assessing the nutritional status of hospitalized elderly
Abd Aziz, Nur Adilah Shuhada; Teng, Nur Islami Mohd Fahmi; Abdul Hamid, Mohd Ramadan; Ismail, Nazrul Hadi
2017-01-01
Purpose The increasing number of elderly people worldwide throughout the years is concerning due to the health problems often faced by this population. This review aims to summarize the nutritional status among hospitalized elderly and the role of the nutritional assessment tools in this issue. Methods A literature search was performed on six databases using the terms “malnutrition”, “hospitalised elderly”, “nutritional assessment”, “Mini Nutritional Assessment (MNA)”, “Geriatric Nutrition Risk Index (GNRI)”, and “Subjective Global Assessment (SGA)”. Results According to the previous studies, the prevalence of malnutrition among hospitalized elderly shows an increasing trend not only locally but also across the world. Under-recognition of malnutrition causes the number of malnourished hospitalized elderly to remain high throughout the years. Thus, the development of nutritional screening and assessment tools has been widely studied, and these tools are readily available nowadays. SGA, MNA, and GNRI are the nutritional assessment tools developed specifically for the elderly and are well validated in most countries. However, to date, there is no single tool that can be considered as the universal gold standard for the diagnosis of nutritional status in hospitalized patients. Conclusion It is important to identify which nutritional assessment tool is suitable to be used in this group to ensure that a structured assessment and documentation of nutritional status can be established. An early and accurate identification of the appropriate treatment of malnutrition can be done as soon as possible, and thus, the malnutrition rate among this group can be minimized in the future. PMID:29042762
Patscanui: an intuitive web interface for searching patterns in DNA and protein data.
Blin, Kai; Wohlleben, Wolfgang; Weber, Tilmann
2018-05-02
Patterns in biological sequences frequently signify interesting features in the underlying molecule. Many tools exist to search for well-known patterns. Less support is available for exploratory analysis, where no well-defined patterns are known yet. PatScanUI (https://patscan.secondarymetabolites.org/) provides a highly interactive web interface to the powerful generic pattern search tool PatScan. The complex PatScan-patterns are created in a drag-and-drop aware interface allowing researchers to do rapid prototyping of the often complicated patterns useful to identifying features of interest.
Global trends in the awareness of sepsis: insights from search engine data between 2012 and 2017.
Jabaley, Craig S; Blum, James M; Groff, Robert F; O'Reilly-Shah, Vikas N
2018-01-17
Sepsis is an established global health priority with high mortality that can be curtailed through early recognition and intervention; as such, efforts to raise awareness are potentially impactful and increasingly common. We sought to characterize trends in the awareness of sepsis by examining temporal, geographic, and other changes in search engine utilization for sepsis information-seeking online. Using time series analyses and mixed descriptive methods, we retrospectively analyzed publicly available global usage data reported by Google Trends (Google, Palo Alto, CA, USA) concerning web searches for the topic of sepsis between 24 June 2012 and 24 June 2017. Google Trends reports aggregated and de-identified usage data for its search products, including interest over time, interest by region, and details concerning the popularity of related queries where applicable. Outlying epochs of search activity were identified using autoregressive integrated moving average modeling with transfer functions. We then identified awareness campaigns and news media coverage that correlated with epochs of significantly heightened search activity. A second-order autoregressive model with transfer functions was specified following preliminary outlier analysis. Nineteen significant outlying epochs above the modeled baseline were identified in the final analysis that correlated with 14 awareness and news media events. Our model demonstrated that the baseline level of search activity increased in a nonlinear fashion. A recurrent cyclic increase in search volume beginning in 2012 was observed that correlates with World Sepsis Day. Numerous other awareness and media events were correlated with outlying epochs. The average worldwide search volume for sepsis was less than that of influenza, myocardial infarction, and stroke. Analyzing aggregate search engine utilization data has promise as a mechanism to measure the impact of awareness efforts. 
Heightened information-seeking about sepsis occurs in close proximity to awareness events and relevant news media coverage. Future work should focus on validating this approach in other contexts and comparing its results to traditional methods of awareness campaign evaluation.
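The study above flags "outlying epochs" of search volume against a modeled baseline using ARIMA with transfer functions. As a deliberately simplified stand-in for that model (not the authors' method), a trailing-window z-score test conveys the same idea: compare each point to the mean and spread of the preceding observations. All numbers below are invented for illustration:

```python
import statistics

def outlying_epochs(series, window=8, z_thresh=3.0):
    """Flag indices whose value sits far above a trailing-window baseline.

    A simplified stand-in for ARIMA-with-transfer-functions outlier
    detection: each point is scored against the mean and population
    standard deviation of the preceding `window` observations.
    """
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        if (series[i] - mu) / sigma > z_thresh:
            flags.append(i)
    return flags

# Hypothetical weekly search volume with one spike (e.g. an awareness event).
volume = [10, 11, 9, 10, 10, 11, 10, 9, 10, 40, 10, 11]
spikes = outlying_epochs(volume, window=8)  # flags index 9
```

A trailing (rather than centered) window keeps the detector causal, so a spike does not contaminate its own baseline; the real ARIMA approach additionally models trend and seasonality, which this sketch ignores.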
Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2014-10-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust `operational' constellation that addresses the needs of decision makers, scientific investigators, and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based systems tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of global ECV (essential climate variable) climate monitoring architecture(s) is explored and described in some detail with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate science stakeholders.
Wroe, Emily B; McBain, Ryan K; Michaelis, Annie; Dunbar, Elizabeth L; Hirschhorn, Lisa R; Cancedda, Corrado
2017-08-01
Despite rapid growth in the number of physicians and academic institutions entering the field of global health, there are few tools that inform global health curricula and assess physician readiness for this field. To address this gap, we describe the development and pilot testing of a new tool to assess nontechnical competencies and values in global health. Competencies assessed include systems-based practice, interpersonal and cross-cultural communication, professionalism and self-care, patient care, mentoring, teaching, management, and personal motivation and experience. The Global Health Delivery Competency Assessment Tool presents 15 case vignettes and open-ended questions related to situations a global health practitioner might encounter, and grades the quality of responses on a 6-point ordinal scale. We interviewed 17 of 18 possible global health residents (94%), matched with 17 residents not training in global health, for a total of 34 interviews. A second reviewer independently scored recordings of 13 interviews for reliability. Pilot testing indicated a high degree of discriminant validity, as measured by the instrument's ability to distinguish between residents who were and were not enrolled in a global health program ( P < .001). It also demonstrated acceptable consistency, as assessed by interrater reliability (κ = 0.53), with a range of item-level agreement from 84%-96%. The tool has potential applicability to a variety of academic and programmatic activities, including evaluation of candidates for global health positions and evaluating the success of training programs in equipping practitioners for entry into this field.
Search optimization of named entities from twitter streams
NASA Astrophysics Data System (ADS)
Fazeel, K. Mohammed; Hassan Mottur, Simama; Norman, Jasmine; Mangayarkarasi, R.
2017-11-01
With the enormous number of tweets posted, people often have difficulty finding exact information about them. One common approach is to look up that information via Google, yet no accurate tool has been developed for search optimization or for retrieving information about tweets. This system therefore provides both search optimization and functionality for retrieving information about tweets. A further problem is that tweets often contain grammatical errors, misspellings, non-standard abbreviations, and meaningless capitalization; the tool eliminates these problems as well. Considerable time is saved, and efficient search optimization retrieves the relevant information for each tweet.
Lorence, Daniel; Abraham, Joanna
2006-01-01
Medical and health-related searches pose a special case of risk when using the web as an information resource. Uninsured consumers, lacking access to a trained provider, will often rely on information from the internet for self-diagnosis and treatment. In areas where treatments are uncertain or controversial, most consumers lack the knowledge to make an informed decision. This exploratory technology assessment examines the use of Keyword Effectiveness Indexing (KEI) analysis as a potential tool for profiling information search and keyword retrieval patterns. Results demonstrate that the KEI methodology can be useful in identifying e-health search patterns, but is limited by semantic or text-based web environments.
RAG-3D: A search tool for RNA 3D substructures
Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...
2015-08-24
In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.
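The core operation described above, matching a query graph against larger target graphs, is subgraph isomorphism. The following is a toy stand-in, not RAG-3D's actual algorithm: a brute-force embedding check over tiny graphs, where nodes stand for secondary-structure elements and edges for their connectivity (all graph data below is invented):

```python
from itertools import permutations

def is_subgraph(query_edges, target_edges, target_nodes):
    """Brute-force test whether the query graph embeds in the target.

    A query substructure matches if some mapping of query nodes onto
    distinct target nodes preserves every query edge (edges undirected).
    Exponential in query size; fine only for small illustrative graphs.
    """
    q_nodes = sorted({n for e in query_edges for n in e})
    for mapping in permutations(target_nodes, len(q_nodes)):
        m = dict(zip(q_nodes, mapping))
        if all((m[a], m[b]) in target_edges or (m[b], m[a]) in target_edges
               for a, b in query_edges):
            return True
    return False

# Target: a 5-node tree (chain 1-2-3-4 with branch 2-5); query: a 3-node chain.
target = {(1, 2), (2, 3), (3, 4), (2, 5)}
query = {("x", "y"), ("y", "z")}
found = is_subgraph(query, target, [1, 2, 3, 4, 5])  # True: e.g. x,y,z -> 1,2,3
```

Production tools use pruned matchers such as VF2 rather than raw enumeration; the point here is only the notion of "structurally similar substructure" as edge-preserving node mapping.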
Boeker, Martin; Vach, Werner; Motschall, Edith
2013-10-26
Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions derived from the structured search procedures conventional in scientific literature retrieval, and to provide an overview of the current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible under consideration of the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. We investigated Cochrane reviews with between 11 and 70 included references each, for a total of 396 references. The Google Scholar searches produced result sets of between 4,320 and 67,800 hits, for a total of 291,190 hits. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. The reported relative recall must be interpreted with care.
It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary.
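The relative recall and precision figures above follow directly from set overlap between the gold-standard references and the retrieved hits. A minimal sketch with hypothetical counts (20 gold references, 10,000 hits, 18 found; not the study's actual numbers):

```python
def relative_recall(gold_refs, retrieved):
    """Fraction of gold-standard references present in the result set."""
    return len(gold_refs & retrieved) / len(gold_refs)

def precision(gold_refs, retrieved):
    """Fraction of retrieved hits that are gold-standard references."""
    return len(gold_refs & retrieved) / len(retrieved)

# Hypothetical review: 20 included references, 10,000 hits, 18 references found.
gold = {f"ref{i}" for i in range(20)}
hits = {f"hit{i}" for i in range(9982)} | {f"ref{i}" for i in range(18)}
recall_pct = 100 * relative_recall(gold, hits)   # 90.0
precision_pct = 100 * precision(gold, hits)      # 0.18
```

The asymmetry the study reports (recall above 90%, precision well under 1%) is exactly what huge undifferentiated result sets produce under these definitions.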
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information it has in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
A Systematic Review of Studies Using the Multidimensional Assessment of Fatigue Scale.
Belza, Basia; Miyawaki, Christina E; Liu, Minhui; Aree-Ue, Suparb; Fessel, Melissa; Minott, Kenya R; Zhang, Xi
2018-04-01
To review how the Multidimensional Assessment of Fatigue (MAF) has been used and evaluate its psychometric properties. We conducted a database search using "multidimensional assessment of fatigue" or "MAF" as key terms from 1993 to 2015, and located 102 studies. Eighty-three were empirical studies and 19 were reviews/evaluations. Research was conducted in 17 countries; 32 diseases were represented. Nine language versions of the MAF were used. The mean of the Global Fatigue Index ranged from 10.9 to 49.4. The MAF was reported to be easy-to-use, had strong reliability and validity, and was used in populations who spoke languages other than English. The MAF is an acceptable assessment tool to measure fatigue and intervention effectiveness in various languages, diseases, and settings across the world.
Probing composite models at the LHC with exotic quarks production
NASA Astrophysics Data System (ADS)
Kukla, Romain
2017-03-01
After the Higgs boson hunt, the LHC could be a powerful tool to unravel the mystery of which physics lies beyond the realm of the Standard Model. Different new sectors have been postulated to address naturalness: SUSY, extra dimensions and strong dynamics theories. Composite models extend EWSB to a global symmetry breaking whose pseudo-Goldstone boson is the SM Higgs boson. The resulting mass spectrum originates from a partial mixing between fundamental fermions and composite fields which creates massive states including new heavy quarks coupled preferentially to the top quark. Searches for these top partners have been carried out by the ATLAS and CMS collaborations, constraining the models. Other composite contributions are expected to enhance the 4-top production, which should be observable in the next years at the LHC.
BnmrOffice: A Free Software for β-nmr Data Analysis
NASA Astrophysics Data System (ADS)
Saadaoui, Hassan
A data-analysis framework with a graphical user interface (GUI) has been developed to analyze β-nmr spectra in an automated and intuitive way. This program, named BnmrOffice, is written in C++ and employs the Qt libraries and tools for designing the GUI and CERN's Minuit optimization routines for minimization. The program runs on multiple platforms and is available for free under the terms of the GNU GPL. The GUI is structured in tabs to search, plot, and analyze data, along with other functionalities. The user can tweak the minimization options and fit multiple data files (or runs) using single or global fitting routines with predefined or new models. Currently, BnmrOffice reads TRIUMF's MUD data and ASCII files, and can be extended to other formats.
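The "global fitting" mentioned above, fitting several runs simultaneously with shared physics parameters, can be illustrated with a deliberately simplified sketch: two runs share one exponential relaxation rate but have independent amplitudes, and the log-linearized problem is solved in a single least-squares step. This is a toy analogue, not BnmrOffice's Minuit-based machinery, and the function name and data are hypothetical.

```python
import numpy as np

def global_decay_fit(runs):
    """Fit y_i(t) = A_i * exp(-lam * t) jointly across runs.

    Each run shares the decay rate lam but has its own amplitude A_i.
    Linearize with log(y) = log(A_i) - lam * t (assumes positive signal)
    and solve one least-squares system with per-run intercepts and a
    shared slope.
    """
    n = len(runs)
    rows, rhs = [], []
    for i, (t, y) in enumerate(runs):
        for tj, yj in zip(t, y):
            row = np.zeros(n + 1)
            row[i] = 1.0   # per-run log-amplitude
            row[n] = -tj   # shared decay rate
            rows.append(row)
            rhs.append(np.log(yj))
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(coef[:n]), coef[n]  # amplitudes, shared rate
```

A real global fit would minimize a chi-square over the raw (non-linearized) model with per-point errors, which is where a minimizer like Minuit comes in.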
In search of a general theory of species' range evolution.
Connallon, Tim; Sgrò, Carla M
2018-06-13
Despite the pervasiveness of the world's biodiversity, no single species has a truly global distribution. In fact, most species have very restricted distributions. What limits species from expanding beyond their current geographic ranges? This has classically been treated by ecologists as an ecological problem and by evolutionary biologists as an evolutionary problem. Such a dichotomy is false: the problem of species' ranges sits firmly within the realm of evolutionary ecology. In support of this view, Polechová presents new theory that explains species' range limits with reference to two key factors central to both ecological and evolutionary theory: migration and population size. This new model sets the scene for empirical tests of range limit theory and builds the case for assisted gene flow as a key management tool for threatened species.
Mukherjee, Joydeep; Llewellyn, Lyndon E; Evans-Illidge, Elizabeth A
2008-01-01
Microbial marine biodiscovery is a recent scientific endeavour developing at a time when information and other technologies are also undergoing great technical strides. Global visualisation of datasets is now becoming available to the world through powerful and readily available software such as Worldwind™, ArcGIS Explorer™ and Google Earth™. Overlaying custom information upon these tools is within the hands of every scientist and more and more scientific organisations are making data available that can also be integrated into these global visualisation tools. The integrated global view that these tools enable provides a powerful desktop exploration tool. Here we demonstrate the value of this approach to marine microbial biodiscovery by developing a geobibliography that incorporates citations on tropical and near-tropical marine microbial natural products research with Google Earth™ and additional ancillary global data sets. The tools and software used are all readily available and the reader is able to use and install the material described in this article. PMID:19172194
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2015-01-01
Interplanetary missions are often subject to difficult constraints, such as the solar phase angle upon arrival at the destination, the arrival velocity, and flyby altitudes. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions that do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work, two stochastic global search methods are developed that are well suited to the constrained global interplanetary trajectory optimization problem.
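One common way to impose constraints directly inside a stochastic global search, rather than filtering afterward, is to fold constraint violations into the score as penalties. The sketch below is a generic penalized random search, not the paper's actual methods; the weight and problem are illustrative.

```python
import random

def penalised_random_search(objective, constraints, bounds,
                            iters=20000, seed=1, weight=1e3):
    """Stochastic global search with constraint violations penalized.

    `constraints` is a list of functions g(x) <= 0; any positive g(x)
    is added to the score scaled by `weight`, so infeasible candidates
    are discouraged rather than discarded after the fact.
    """
    rng = random.Random(seed)
    best_x, best_score = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        penalty = sum(max(0.0, g(x)) for g in constraints)
        score = objective(x) + weight * penalty
        if score < best_score:
            best_x, best_score = x, score
    return best_x, best_score
```

Real trajectory optimizers replace the uniform sampling with evolutionary or swarm operators, but the penalty treatment of constraints carries over directly.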
Organizational Readiness Tools for Global Health Intervention: A Review
Dearing, James W.
2018-01-01
The ability of non-governmental organizations, government agencies, and corporations to deliver and support the availability and use of interventions for improved global public health depends on their readiness to do so. Yet readiness has proven to be a rather fluid concept in global public health, perhaps due to its multidimensional nature and because scholars and practitioners have applied the concept at different levels such as the individual, organization, and community. This review concerns 30 publicly available tools created for the purpose of organizational readiness assessment in order to carry out global public health objectives. Results suggest that these tools assess organizational capacity in the absence of measuring organizational motivation, thus overlooking a key aspect of organizational readiness. Moreover, the tools reviewed are mostly untested by their developers to establish whether the tools do, in fact, measure capacity. These results suggest opportunities for implementation science researchers. PMID:29552552
Exploring Google to Enhance Reference Services
ERIC Educational Resources Information Center
Jia, Peijun
2011-01-01
Google is currently recognized as the world's most powerful search engine. Google is so powerful and intuitive that one does not need to possess many skills to use it. However, Google is more than just simple search. For those who have special search skills and know Google's superior search features, it becomes an extraordinary tool. To understand…
The library as a reference tool: online catalogs
Stark, M.
1991-01-01
Online catalogs are computerized listings of materials in a particular library or group of libraries. General characteristics of online catalogs include ability for searching interactively and for locating descriptions of books, maps, and reports on regional or topical geology. Suggestions for searching, evaluating results, modifying searches, and limitations of searching are presented. -Author
Taming the Information Jungle with WWW Search Engines.
ERIC Educational Resources Information Center
Repman, Judi; And Others
1997-01-01
Because searching the Web with different engines often produces different results, the best strategy is to learn how each engine works. Discusses comparing search engines; qualities to consider (ease of use, relevance of hits, and speed); and six of the most popular search tools (Yahoo, Magellan, InfoSeek, Alta Vista, Lycos, and Excite). Lists…
Search Engines for Tomorrow's Scholars, Part Two
ERIC Educational Resources Information Center
Fagan, Jody Condit
2012-01-01
This two-part article considers how well some of today's search tools support scholars' work. The first part of the article reviewed Google Scholar and Microsoft Academic Search using a modified version of Carole L. Palmer, Lauren C. Teffeau, and Carrie M. Pirmann's framework (2009). Microsoft Academic Search is a strong contender when…
The I4 Online Query Tool for Earth Observations Data
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Vanderbloemen, Lisa A.; Lawrence, Samuel J.
2015-01-01
The NASA Earth Observation System Data and Information System (EOSDIS) delivers an average of 22 terabytes per day of data collected by orbital and airborne sensor systems to end users through an integrated online search environment (the Reverb/ECHO system). Earth observations data collected by sensors on the International Space Station (ISS) are not currently included in the EOSDIS system, and are only accessible through various individual online locations. This increases the effort required by end users to query multiple datasets, and limits the opportunity for data discovery and innovations in analysis. The Earth Science and Remote Sensing Unit of the Exploration Integration and Science Directorate at NASA Johnson Space Center has collaborated with the School of Earth and Space Exploration at Arizona State University (ASU) to develop the ISS Instrument Integration Implementation (I4) data query tool to provide end users a clean, simple online interface for querying both current and historical ISS Earth Observations data. The I4 interface is based on the Lunaserv and Lunaserv Global Explorer (LGE) open-source software packages developed at ASU for query of lunar datasets. In order to avoid mirroring existing databases - and the need to continually sync/update those mirrors - our design philosophy is for the I4 tool to be a pure query engine only. Once an end user identifies a specific scene or scenes of interest, I4 transparently takes the user to the appropriate online location to download the data. The tool consists of two public-facing web interfaces. The Map Tool provides a graphic geobrowser environment where the end user can navigate to an area of interest and select single or multiple datasets to query. The Map Tool displays active image footprints for the selected datasets (Figure 1). 
Selecting a footprint will open a pop-up window that includes a browse image and a link to available image metadata, along with a link to the online location to order or download the actual data. Search results are either delivered in the form of browse images linked to the appropriate online database, similar to the Map Tool, or they may be transferred within the I4 environment for display as footprints in the Map Tool. Datasets searchable through I4 (http://eol.jsc.nasa.gov/I4_tool) currently include: Crew Earth Observations (CEO) cataloged and uncataloged handheld astronaut photography; Sally Ride EarthKAM; Hyperspectral Imager for the Coastal Ocean (HICO); and the ISS SERVIR Environmental Research and Visualization System (ISERV). The ISS is a unique platform in that it will have multiple users over its lifetime, and that no single remote sensing system has a permanent internal or external berth. The open source I4 tool is designed to enable straightforward addition of new datasets as they become available such as ISS-RapidSCAT, Cloud Aerosol Transport System (CATS), and the High Definition Earth Viewing (HDEV) system. Data from other sensor systems, such as those operated by the ISS International Partners or under the auspices of the US National Laboratory program, can also be added to I4 provided sufficient access to enable searching of data or metadata is available. Commercial providers of remotely sensed data from the ISS may be particularly interested in I4 as an additional means of directing potential customers and clients to their products.
Climate Model Diagnostic Analyzer
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei
2015-01-01
The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. With an exploratory nature of climate data analyses and an explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone sharing them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables the physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.
Outlook: directed development: catalysing a global biotech industry.
Sun, Anthony; Perkins, Tom
2005-09-01
Governments are increasingly relying on directed development tools or proactive public-policy approaches to stimulate scientific and economic development for their biotechnology industries. This article will discuss the four main tools of directed development in biotechnology and the lessons learned from current global efforts utilizing these tools.
Exploring FlyBase Data Using QuickSearch.
Marygold, Steven J; Antonazzo, Giulia; Attrill, Helen; Costa, Marta; Crosby, Madeline A; Dos Santos, Gilberto; Goodman, Joshua L; Gramates, L Sian; Matthews, Beverley B; Rey, Alix J; Thurmond, Jim
2016-12-08
FlyBase (flybase.org) is the primary online database of genetic, genomic, and functional information about Drosophila species, with a major focus on the model organism Drosophila melanogaster. The long and rich history of Drosophila research, combined with recent surges in genomic-scale and high-throughput technologies, mean that FlyBase now houses a huge quantity of data. Researchers need to be able to rapidly and intuitively query these data, and the QuickSearch tool has been designed to meet these needs. This tool is conveniently located on the FlyBase homepage and is organized into a series of simple tabbed interfaces that cover the major data and annotation classes within the database. This unit describes the functionality of all aspects of the QuickSearch tool. With this knowledge, FlyBase users will be equipped to take full advantage of all QuickSearch features and thereby gain improved access to data relevant to their research. Copyright © 2016 John Wiley & Sons, Inc.
Global Emergency Medicine: A Review of the Literature From 2016.
Becker, Torben K; Hansoti, Bhakti; Bartels, Susan; Hayward, Alison Schroth; Hexom, Braden J; Lunney, Kevin M; Marsh, Regan H; Osei-Ampofo, Maxwell; Trehan, Indi; Chang, Julia; Levine, Adam C
2017-09-01
The Global Emergency Medicine Literature Review (GEMLR) conducts an annual search of peer-reviewed and gray literature relevant to global emergency medicine (EM) to identify, review, and disseminate the most important new research in this field to a global audience of academics and clinical practitioners. This year 13,890 articles written in four languages were identified by our search. These articles were distributed among 20 reviewers for initial screening based on their relevance to the field of global EM. An additional two reviewers searched the gray literature. All articles that were deemed appropriate by at least one reviewer and approved by their editor underwent formal scoring of overall quality and importance. Two independent reviewers scored all articles. A total of 716 articles met our inclusion criteria and underwent full review. Fifty-nine percent were categorized as emergency care in resource-limited settings, 17% as EM development, and 24% as disaster and humanitarian response. Nineteen articles received scores of 18.5 or higher out of a maximum score of 20 and were selected for formal summary and critique. Inter-rater reliability testing between reviewers revealed Cohen's kappa of 0.441. In 2016, the total number of articles identified by our search continued to increase. The proportion of articles in each of the three categories remained stable. Studies and reviews with a focus on infectious diseases, pediatrics, and the use of ultrasound in resource-limited settings represented the majority of articles selected for final review. © 2017 The Authors. Academic Emergency Medicine published by Wiley Periodicals, Inc. on behalf of the Society for Academic Emergency Medicine (SAEM).
A Meta-Data Driven Approach to Searching for Educational Resources in a Global Context.
ERIC Educational Resources Information Center
Wade, Vincent P.; Doherty, Paul
This paper presents the design of an Internet-enabled search service that supports educational resource discovery within an educational brokerage service. More specifically, it presents the design and implementation of a metadata-driven approach to implementing the distributed search and retrieval of Internet-based educational resources and…
A modified three-term PRP conjugate gradient algorithm for optimization models.
Wu, Yanlin
2017-01-01
The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but this fails under the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
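For reference, a generic PRP conjugate gradient iteration with an Armijo backtracking line search can be sketched as follows. This illustrates the family of methods discussed, not the authors' specific three-term modification; the PRP+ nonnegativity clip and the steepest-descent safeguard are standard textbook devices.

```python
import numpy as np

def prp_cg(f, grad_f, x0, tol=1e-8, max_iter=500):
    """Polak-Ribiere-Polyak (PRP+) conjugate gradient with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # safeguard: restart with steepest descent
        # Armijo backtracking line search
        alpha, c, rho = 1.0, 1e-4, 0.5
        fx = f(x)
        while f(x + alpha * d) > fx + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # PRP+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

The three-term variants studied in the paper replace the two-term update `d = -g_new + beta * d` with a direction built from three terms so that sufficient descent holds regardless of the line search.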
NASA Astrophysics Data System (ADS)
Park, Sang-Gon; Jeong, Dong-Seok
2000-12-01
In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity via the UESA (Unimodal Error Surface Assumption), under which the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima and result in poor matching accuracy. So we propose a new motion estimation algorithm using the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, with half the computational load.
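The baseline DS pattern that FADS builds on can be sketched as follows: a large diamond search pattern (LDSP) is applied repeatedly until the best match sits at the centre, then one small diamond (SDSP) step refines the motion vector. This is the classic two-stage DS, not the adaptive origin-shifting of FADS itself; the synthetic frame in the test is fabricated for illustration.

```python
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]   # large diamond pattern
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # small diamond pattern

def sad(ref, cur, y, x, dy, dx, B):
    """Sum of absolute differences between the current block and a
    displaced candidate block in the reference frame."""
    cand = ref[y + dy:y + dy + B, x + dx:x + dx + B]
    if cand.shape != (B, B):
        return float("inf")  # candidate falls outside the frame
    return int(np.abs(cand - cur[y:y + B, x:x + B]).sum())

def diamond_search(ref, cur, y, x, B=8):
    """Two-stage diamond search for the motion vector of one block."""
    my, mx = 0, 0
    while True:  # LDSP stage: repeat until the centre is the best point
        costs = [(sad(ref, cur, y, x, my + dy, mx + dx, B), dy, dx)
                 for dy, dx in LDSP]
        c, dy, dx = min(costs)
        if (dy, dx) == (0, 0) or c >= costs[0][0]:
            break
        my, mx = my + dy, mx + dx
    # SDSP stage: one small-diamond refinement
    _, dy, dx = min((sad(ref, cur, y, x, my + dy, mx + dx, B), dy, dx)
                    for dy, dx in SDSP)
    return my + dy, mx + dx
```

FADS-style methods would seed `my, mx` from neighboring blocks' motion vectors instead of starting every block at zero motion.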
Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias
2013-06-26
There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. To investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, P<.001). General practitioners were more likely than specialists to report "lack of time" as a barrier towards finding information on the Internet (59%, 50/85 vs 43%, 111/260; χ²₁=7.231, P=.007) and to restrict their search by language (48%, 43/89 vs 35%, 97/278; χ²₁=5.148, P=.023). 
They frequently consult general health websites (36%, 31/87 vs 19%, 51/269; χ²₂=12.813, P=.002) and online physician network communities (17%, 15/86 vs 6%, 17/270; χ²₂=9.841, P<.001). The reported inaccessibility of relevant, trustworthy resources on the Internet and frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open access to high-quality content for physicians. The potential role of collaborative tools in providing the psychological support and affirmation normally given by medical colleagues needs further consideration. Tools that speed up quality evaluation and aid selection of relevant search results need to be identified. In order to develop an adequate search tool, a differentiated approach considering the differing needs of physician subgroups may be beneficial.
Mollayeva, Tatyana; Thurairajah, Pravheen; Burton, Kirsteen; Mollayeva, Shirin; Shapiro, Colin M; Colantonio, Angela
2016-02-01
This review appraises the process of development and the measurement properties of the Pittsburgh sleep quality index (PSQI), gauging its potential as a screening tool for sleep dysfunction in non-clinical and clinical samples; it also compares non-clinical and clinical populations in terms of PSQI scores. MEDLINE, Embase, PsycINFO, and HAPI databases were searched. Critical appraisal of studies of measurement properties was performed using COSMIN. Of 37 reviewed studies, 22 examined construct validity, 19 known-group validity, 15 internal consistency, and three test-retest reliability. Study quality ranged from poor to excellent, with the majority designated fair. Internal consistency, based on Cronbach's alpha, was good. Discrepancies were observed in factor analytic studies. In non-clinical and clinical samples with known differences in sleep quality, the PSQI global scores and all subscale scores, with the exception of sleep disturbance, differed significantly. The best evidence synthesis for the PSQI showed strong reliability and validity, and moderate structural validity in a variety of samples, suggesting the tool fulfills its intended utility. A taxonometric analysis can contribute to better understanding of sleep dysfunction as either a dichotomous or continuous construct. Copyright © 2015 Elsevier Ltd. All rights reserved.
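Cronbach's alpha, the internal-consistency statistic cited above, is straightforward to compute from an item-score matrix. A minimal sketch (the example scores in the usage are fabricated for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly parallel items give alpha = 1, while uncorrelated items drive it toward 0, which is why the statistic serves as a reliability screen.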
A modified conjugate gradient coefficient with inexact line search for unconstrained optimization
NASA Astrophysics Data System (ADS)
Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa
2016-11-01
The conjugate gradient (CG) method is a line search algorithm best known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization functions. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.
Egea, Jose A; Henriques, David; Cokelaer, Thomas; Villaverde, Alejandro F; MacNamara, Aidan; Danciu, Diana-Patricia; Banga, Julio R; Saez-Rodriguez, Julio
2014-05-10
Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer programming (MINLP) problems, and variable neighborhood search (VNS) for Integer Programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single-thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state of the art metaheuristics, and its open and modular structure allows the addition of further methods.
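The variable neighborhood search (VNS) idea that MEIGO applies to integer problems can be illustrated with a bare-bones sketch: shake the incumbent within progressively larger neighborhoods, run a simple local search, and restart the neighborhood index whenever an improvement is found. This is a generic textbook VNS, not MEIGO's implementation; the function names and neighborhood structure are illustrative.

```python
import random

def local_search(f, x):
    """First-improvement coordinate search with unit integer steps."""
    fx = f(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for step in (-1, 1):
                y = x[:i] + [x[i] + step] + x[i + 1:]
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
    return x, fx

def vns(f, x0, neighborhoods, max_iter=200, seed=0):
    """Basic variable neighborhood search over integer decision vectors.

    `neighborhoods` is a list of shake radii; a random perturbation of
    radius r is followed by local search, and improvement resets k to 0.
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        k = 0
        while k < len(neighborhoods):
            r = neighborhoods[k]
            y = [xi + rng.randint(-r, r) for xi in x]  # shake
            y, fy = local_search(f, y)
            if fy < fx:
                x, fx, k = y, fy, 0   # move and restart neighborhoods
            else:
                k += 1
    return x, fx
```

The escalating shake radii are what let VNS escape local optima that the inner local search alone cannot leave.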
Reach and messages of the world's largest ivory burn.
Braczkowski, Alexander; Holden, Matthew H; O'Bryan, Christopher; Choi, Chi-Yeung; Gan, Xiaojing; Beesley, Nicholas; Gao, Yufang; Allan, James; Tyrrell, Peter; Stiles, Daniel; Brehony, Peadar; Meney, Revocatus; Brink, Henry; Takashina, Nao; Lin, Ming-Ching; Lin, Hsien-Yung; Rust, Niki; Salmo, Severino G; Watson, James E M; Kahumbu, Paula; Maron, Martine; Possingham, Hugh P; Biggs, Duan
2018-03-01
Recent increases in ivory poaching have depressed African elephant populations. Successful enforcement has led to ivory stockpiling. Stockpile destruction is becoming increasingly popular, and most destruction has occurred in the last 5 years. Ivory destruction is intended to send a strong message against ivory consumption, both in promoting a taboo on ivory use and catalyzing policy change. However, there has been no effort to establish the distribution and extent of media reporting on ivory destruction events globally. We analyzed media coverage of the largest ivory destruction event in history (Kenya, 30 April 2016) across 11 nation states connected to ivory trade. We used an online-media crawling tool to search online media outlets and subjected 5 of the largest print newspapers (by circulation) in 5 nations of interest to content analysis. Most online news on the ivory burn came from the United States (81% of 1944 articles), whereas most of the print news articles came from Kenya (61% of 157 articles). Eighty-six to 97% of all online articles reported the burn as a positive conservation action, whereas 4-50% discussed ivory burning as having a negative impact on elephant conservation. Most articles discussed law enforcement and trade bans as effective for elephant conservation. There was more relative search interest globally in the 2016 Kenyan ivory burn than any other burn in 5 years. Ours is the first attempt to track the reach of media coverage relative to an ivory burn and provides a case study in tracking the effects of a conservation-marketing event. © 2018 Society for Conservation Biology.
Tong, Shilu; Dale, Pat; Nicholls, Neville; Mackenzie, John S; Wolff, Rodney; McMichael, Anthony J
2008-12-01
Arbovirus diseases have emerged as a global public health concern. However, the impact of climatic, social, and environmental variability on the transmission of arbovirus diseases remains to be determined. Our goal for this study was to provide an overview of research development and future research directions about the interrelationship between climate variability, social and environmental factors, and the transmission of Ross River virus (RRV), the most common and widespread arbovirus disease in Australia. We conducted a systematic literature search on climatic, social, and environmental factors and RRV disease. Potentially relevant studies were identified from a series of electronic searches. The body of evidence revealed that the transmission cycles of RRV disease appear to be sensitive to climate and tidal variability. Rainfall, temperature, and high tides were among major determinants of the transmission of RRV disease at the macro level. However, the nature and magnitude of the interrelationship between climate variability, mosquito density, and the transmission of RRV disease varied with geographic area and socioenvironmental condition. Projected anthropogenic global climatic change may result in an increase in RRV infections, and the key determinants of RRV transmission we have identified here may be useful in the development of an early warning system. The analysis indicates that there is a complex relationship between climate variability, social and environmental factors, and RRV transmission. Different strategies may be needed for the control and prevention of RRV disease at different levels. These research findings could be used as an additional tool to support decision making in disease control/surveillance and risk management.
A Novel Web Application to Analyze and Visualize Extreme Heat Events
NASA Astrophysics Data System (ADS)
Li, G.; Jones, H.; Trtanj, J.
2016-12-01
Extreme heat is the leading cause of weather-related deaths in the United States annually and is expected to increase with our warming climate. However, most of these deaths are preventable with proper tools and services to inform the public about heat waves. In this project, we have investigated the key indicators of a heat wave, the vulnerable populations, and the data visualization strategies by which those populations most effectively absorb heat wave data. A map-based web app has been created that allows users to search and visualize historical heat waves in the United States incorporating these strategies. This app utilizes daily maximum temperature data from the NOAA Global Historical Climatology Network, which contains about 2.7 million data points from over 7,000 stations per year. The point data are spatially aggregated into county-level data using county geometry from the US Census Bureau and stored in a Postgres database with PostGIS spatial capability. GeoServer, a powerful map server, is used to serve the image and data layers (WMS and WFS). The JavaScript-based web-mapping platform Leaflet is used to display the temperature layers. A number of functions have been implemented for search and display. Users can search for extreme heat events by county or by date. The "by date" option allows a user to select a date and a Tmax threshold, which then highlights all of the areas on the map that meet those date and temperature parameters. The "by county" option allows the user to select a county on the map, which then retrieves a list of heat wave dates and daily Tmax measurements. This visualization is clean, user-friendly, and novel: although these time, space, and temperature measurements can be found by querying meteorological datasets, no existing tool neatly packages this information together in an easily accessible and non-technical manner, especially at a time when climate change urges a better understanding of heat waves.
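The two search modes described in this abstract reduce to simple filters over the county-aggregated table. A minimal in-memory sketch follows; the county names and temperature values are illustrative made-up rows, and the real app queries Postgres/PostGIS instead.

```python
# Illustrative stand-in for the county-aggregated Tmax table described above;
# the real app stores these rows in a Postgres/PostGIS database.
records = [
    {"county": "Maricopa, AZ", "date": "2016-06-19", "tmax_f": 118.0},
    {"county": "Multnomah, OR", "date": "2016-06-19", "tmax_f": 84.0},
    {"county": "Maricopa, AZ", "date": "2016-06-20", "tmax_f": 116.0},
]

def counties_by_date(records, date, tmax_threshold_f):
    """'By date' search: counties whose daily Tmax meets the threshold."""
    return sorted(r["county"] for r in records
                  if r["date"] == date and r["tmax_f"] >= tmax_threshold_f)

def dates_by_county(records, county, tmax_threshold_f=0.0):
    """'By county' search: (date, Tmax) rows for one selected county."""
    return [(r["date"], r["tmax_f"]) for r in records
            if r["county"] == county and r["tmax_f"] >= tmax_threshold_f]
```

The web app performs the same two filters server-side and renders the matching counties as highlighted map polygons.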
RISK FACTORS FOR PRESSURE INJURIES AMONG CRITICAL-CARE PATIENTS: A SYSTEMATIC REVIEW
Alderden, Jenny; Rondinelli, June; Pepper, Ginette; Cummins, Mollie; Whitney, JoAnne
2017-01-01
Objective To identify risk factors independently predictive of pressure injury (also known as pressure ulcer) development among critical-care patients. Design We undertook a systematic review of primary research based on standardized criteria set forth by the Institute of Medicine. Data Sources We searched the following databases: CINAHL (EBSCOhost), the Cochrane Library (Wiley), Dissertations & Theses Global (ProQuest), PubMed (National Library of Medicine), and Scopus. There was no language restriction. Method A research librarian coordinated the search strategy. Articles that potentially met inclusion criteria were screened by two investigators. Among the articles that met selection criteria, one investigator extracted data and a second investigator reviewed the data for accuracy. Based on a literature search, we developed a tool for assessing study quality using a combination of currently available tools and expert input. We used the method developed by Coleman and colleagues in 2014 to generate evidence tables and a summary narrative synthesis by domain and subdomain. Results Of 1753 abstracts reviewed, 158 were identified as potentially eligible and 18 fulfilled eligibility criteria. Five studies were classified as high quality, two were moderate quality, nine were low quality, and two were of very low quality. Age, mobility/activity, perfusion, and vasopressor infusion emerged as important risk factors for pressure injury development, whereas results for risk categories that are theoretically important, including nutrition and skin/pressure injury status, were mixed. Methodological limitations across studies limited the generalizability of the results, and future research is needed, particularly to evaluate risk conferred by altered nutrition and skin/pressure injury status, and to further elucidate the effects of perfusion-related variables.
Conclusions Results underscore the importance of avoiding overinterpretation of a single study, and the importance of taking study quality into consideration when reviewing risk factors. Maximal pressure injury prevention efforts are particularly important among critical-care patients who are older, have altered mobility, experience poor perfusion, or who are receiving a vasopressor infusion. PMID:28384533
How is depression experienced around the world? A systematic review of qualitative literature
Haroz, E.E.; Ritchey, M.; Bass, J.K.; Kohrt, B.A.; Augustinavicius, J.; Michalopoulos, L.; Burkey, M.D.; Bolton, P.
2017-01-01
To date, global research on depression has used assessment tools based on research and clinical experience drawn from Western populations (i.e., in North America, Europe, and Australia). There may be features of depression in non-Western populations which are not captured in current diagnostic criteria or measurement tools, as well as criteria for depression that are not relevant in other regions. We investigated this possibility through a systematic review of qualitative studies of depression worldwide. Nine online databases were searched for records that used qualitative methods to study depression. Initial searches were conducted between August 2012 and December 2012; an updated search was repeated in June of 2015 to include relevant literature published between December 30, 2012 and May 30, 2015. No date limits were set for inclusion of articles. A total of 16,130 records were identified and 138 met full inclusion criteria. Included studies were published between 1976 and 2015. These 138 studies represented data on 170 different study populations (some reported on multiple samples) and 77 different nationalities/ethnicities. Variation in results by geographical region, gender, and study context was examined to determine the consistency of descriptions across populations. Fisher's exact tests were used to compare frequencies of features across region, gender and context. Seven of the 15 features with the highest relative frequency form part of the DSM-5 diagnosis of Major Depressive Disorder (MDD). However, many of the other features with relatively high frequencies across the studies are associated features in the DSM, but are not prioritized as diagnostic criteria and therefore not included in standard instruments. The DSM-5 diagnostic criteria of problems with concentration and psychomotor agitation or slowing were infrequently mentioned.
This research suggests that the DSM model and standard instruments currently based on the DSM may not adequately reflect the experience of depression at the worldwide or regional levels. PMID:28069271
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.
2016-02-01
The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans, including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy-to-navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well-documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolution elevation data from a variety of sources including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically 100 m resolution in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter-scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data.
New data are added to GMRT on an ongoing basis, with two scheduled releases per year. GMRT is available as both gridded data and images that can be viewed and downloaded directly through the Java application GeoMapApp (www.geomapapp.org) and the web-based GMRT MapTool. In addition, the GMRT GridServer API provides programmatic access to grids, imagery, profiles, and single point elevation values.
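Returning a single-point elevation from a gridded DEM, as the GridServer API described above does, ultimately comes down to interpolating within a grid cell. A bilinear-interpolation sketch on a toy grid follows; the API call itself and any GMRT-specific details are not shown.

```python
def bilinear(grid, x0, y0, dx, dy, x, y):
    """Bilinear interpolation of a regular elevation grid at point (x, y).
    grid[i][j] holds the value at (x0 + j*dx, y0 + i*dy)."""
    j = int((x - x0) // dx)          # column of the cell containing x
    i = int((y - y0) // dy)          # row of the cell containing y
    tx = (x - (x0 + j * dx)) / dx    # fractional position within the cell
    ty = (y - (y0 + i * dy)) / dy
    z00, z01 = grid[i][j], grid[i][j + 1]
    z10, z11 = grid[i + 1][j], grid[i + 1][j + 1]
    return (z00 * (1 - tx) * (1 - ty) + z01 * tx * (1 - ty)
            + z10 * (1 - tx) * ty + z11 * tx * ty)
```

At the centre of a cell the result is simply the mean of its four corner elevations.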
Testing search strategies for systematic reviews in the Medline literature database through PubMed.
Volpato, Enilze S N; Betini, Marluci; El Dib, Regina
2014-04-01
A high-quality electronic search is essential in ensuring accuracy and completeness in retrieved records for the conduct of a systematic review. We analysed the available sample of search strategies to identify the best method for searching in Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation and use of a simple search or search history. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted in the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to the use of phrase searches with parentheses, there was no difference between the results with and without parentheses and simple searches or search history tools in 100% of the sample analysed (P = 1.0). The number of results retrieved by the searches analysed was smaller using double quotation marks and using truncation compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use phrase-searching parentheses to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears exactly as proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches and search history tools were the same, we recommend using the latter.
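The strategy variants compared in this study can be made concrete by constructing them as query strings. The 4-character truncation stems below are an illustrative choice, not the study's protocol.

```python
def strategy_variants(term_words):
    """Build the search-strategy variants compared above for one
    multi-word phrase, e.g. ['low', 'back', 'pain']."""
    phrase = " ".join(term_words)
    return {
        "plain":       phrase,             # bare phrase search
        "parenthesis": f"({phrase})",      # phrase wrapped in parentheses
        "quoted":      f'"{phrase}"',      # exact-phrase search
        # Illustrative truncation: 4-character stems with a wildcard.
        "truncated":   " ".join(w[:4] + "*" for w in term_words),
    }

v = strategy_variants(["low", "back", "pain"])
```

Pasting each variant into PubMed's search box reproduces the kind of comparison the authors performed across their 120 strategies.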
ERIC Educational Resources Information Center
Isakson, Carol
2004-01-01
Search engines rapidly add new services and experimental tools in trying to outmaneuver each other for customers. In this article, the author describes the latest additional services of some search engines and provides its sources. The author also suggests tips for using these new search upgrades.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-18
... search (advanced search) engine or the ADAMS ``Find'' tool in Citrix. The Westinghouse AP1000 DCD, which... local residents at the South Dade Regional Library and the Homestead Branch Library. To search for...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-08
... either the Web-based search (advanced search) engine or the ADAMS find tool in Citrix. Within 30 days.... To search for other related documents in ADAMS using the Watts Bar Nuclear Plant Unit 2 OL...
NASA Astrophysics Data System (ADS)
Lihoreau, Mathieu; Ings, Thomas C.; Chittka, Lars; Reynolds, Andy M.
2016-07-01
Simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a search space. It is frequently implemented in computers working on complex optimization problems but until now has not been directly observed in nature as a searching strategy adopted by foraging animals. We analysed high-speed video recordings of the three-dimensional searching flights of bumblebees (Bombus terrestris) made in the presence of large or small artificial flowers within a 0.5 m³ enclosed arena. Analyses of the three-dimensional flight patterns in both conditions reveal signatures of simulated annealing searches. After leaving a flower, bees tend to scan back and forth past that flower before making prospecting flights (loops), whose length increases over time. The search pattern becomes gradually more expansive and culminates when another rewarding flower is found. Bees then scan back and forth in the vicinity of the newly discovered flower and the process repeats. This looping search pattern, in which flight step lengths are typically power-law distributed, provides a relatively simple yet highly efficient strategy for pollinators such as bees to find the best-quality resources in complex environments made of multiple ephemeral feeding sites with nutritionally variable rewards.
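The simulated annealing algorithm this abstract refers to can be sketched in a few lines. This is a textbook implementation on an assumed one-dimensional multimodal landscape, not the authors' flight-analysis code.

```python
import math
import random

def simulated_annealing(f, x0, t0=2.0, cooling=0.995, steps=4000, seed=1):
    """Textbook simulated annealing sketch (maximization): accept downhill
    moves with probability exp(delta/T), cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)      # local Gaussian proposal
        fc = f(cand)
        if fc >= fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc                # accept (always uphill,
        if fx > fbest:                      #  sometimes downhill)
            best, fbest = x, fx
        t *= cooling                        # anneal the temperature
    return best, fbest

# Multimodal landscape: global maximum at x = 0 amid poorer local maxima.
f = lambda x: math.cos(3 * x) * math.exp(-0.1 * x * x)
x_best, f_best = simulated_annealing(f, x0=4.0)
```

With this schedule the hot early phase wanders widely (analogous to the bees' expanding loops) and typically escapes the local maxima near x ≈ ±2.1, ending near the global maximum at x = 0.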
Hashimoto, Shunji; Takazawa, Yoshikatsu; Fushimi, Akihiro; Tanabe, Kiyoshi; Shibata, Yasuyuki; Ieda, Teruyo; Ochiai, Nobuo; Kanda, Hirooki; Ohura, Takeshi; Tao, Qingping; Reichenbach, Stephen E
2011-06-17
We successfully detected halogenated compounds from several kinds of environmental samples by using a comprehensive two-dimensional gas chromatograph coupled with a tandem mass spectrometer (GC×GC-MS/MS). For the global detection of organohalogens, fly ash sample extracts were directly measured without any cleanup process. The global and selective detection of halogenated compounds was achieved by neutral loss scans of chlorine, bromine and/or fluorine using an MS/MS. It was also possible to search for and identify compounds using two-dimensional mass chromatograms and mass profiles obtained from measurements of the same sample with a GC×GC-high resolution time-of-flight mass spectrometer (HRTofMS) under the same conditions as those used for the GC×GC-MS/MS. In this study, novel software tools were also developed to help find target (halogenated) compounds in the data provided by a GC×GC-HRTofMS. As a result, many dioxin and polychlorinated biphenyl congeners and many other halogenated compounds were found in fly ash extract and sediment samples. By extracting the desired information, which concerned organohalogens in this study, from huge quantities of data with the GC×GC-HRTofMS, we reveal the possibility of realizing the total global detection of compounds with one GC measurement of a sample without any pre-treatment. Copyright © 2011 Elsevier B.V. All rights reserved.
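The neutral-loss idea in this abstract, keeping only precursor→product ion pairs whose mass difference matches the lost halogen, can be sketched as a simple filter. The transition values below are made up for illustration.

```python
CL_MASS = 34.9689  # monoisotopic mass of 35Cl (loss of a Cl radical)

def neutral_loss_hits(transitions, loss_mass, tol=0.01):
    """Neutral-loss filter sketch: keep (precursor, product) m/z pairs whose
    difference matches a target loss (e.g. Cl) within a tolerance."""
    return [(p, q) for (p, q) in transitions
            if abs((p - q) - loss_mass) <= tol]
```

An instrument's neutral loss scan performs the same selection in hardware; here the first made-up transition differs by ~34.97 Da and is therefore flagged as a candidate chlorinated compound.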
Global reach of direct-to-consumer advertising using social media for illicit online drug sales.
Mackey, Tim Ken; Liang, Bryan A
2013-05-29
Illicit or rogue Internet pharmacies are a recognized global public health threat that have been identified as utilizing various forms of online marketing and promotion, including social media. To assess the accessibility of creating illicit no prescription direct-to-consumer advertising (DTCA) online pharmacy social media marketing (eDTCA2.0) and evaluate its potential global reach. We identified the top 4 social media platforms allowing eDTCA2.0. After determining applicable platforms (ie, Facebook, Twitter, Google+, and MySpace), we created a fictitious advertisement advertising no prescription drugs online and posted it to the identified social media platforms. Each advertisement linked to a unique website URL that consisted of a site error page. Employing Web search analytics, we tracked the number of users visiting these sites and their location. We used commercially available Internet tools and services, including website hosting, domain registration, and website analytic services. Illicit online pharmacy social media content for Facebook, Twitter, and MySpace remained accessible despite highly questionable and potentially illegal content. Fictitious advertisements promoting illicit sale of drugs generated aggregate unique user traffic of 2795 visits over a 10-month period. Further, traffic to our websites originated from a number of countries, including high-income and middle-income countries, and emerging markets. Our results indicate there are few barriers to entry for social media-based illicit online drug marketing. Further, illicit eDTCA2.0 has globalized outside US borders to other countries through unregulated Internet marketing.
Introducing Products to DoD Using Specifications and Standards
2011-08-18
... to utilize the Product Introduction Tool. Product Introduction Process User Policy ... Identify Categories/Subcategories: Identify the category/subcategory that most closely covers your ...
ERIC Educational Resources Information Center
Leibiger, Carol A.
2011-01-01
Googlitis, the overreliance on search engines for research and the resulting development of poor searching skills, is a recognized problem among today's students. Google is not an effective research tool because, in addition to encouraging keyword searching at the expense of more powerful subject searching, it only accesses the Surface Web and is…
Promising Practices in Instruction of Discovery Tools
ERIC Educational Resources Information Center
Buck, Stefanie; Steffy, Christina
2013-01-01
Libraries are continually changing to meet the needs of users; this includes implementing discovery tools, also referred to as web-scale discovery tools, to make searching library resources easier. Because these tools are so new, it is difficult to establish definitive best practices for teaching these tools; however, promising practices are…
Educational quality of YouTube videos on knee arthrocentesis.
Fischer, Jonas; Geurts, Jeroen; Valderrabano, Victor; Hügle, Thomas
2013-10-01
Knee arthrocentesis is a commonly performed diagnostic and therapeutic procedure in rheumatology and orthopedic surgery. Classic teaching of arthrocentesis skills relies on hands-on practice under supervision. Video-based online teaching is an increasingly utilized educational tool in higher and clinical education. YouTube is a popular video-sharing Web site that can be accessed as a teaching source. The objective of this study was to assess the educational value of YouTube videos on knee arthrocentesis posted by health professionals and institutions during the period from 2008 to 2012. The YouTube video database was systematically searched using 5 search terms related to knee arthrocentesis. Two independent clinical reviewers assessed videos for procedural technique and educational value using a 5-point global score, ranging from 1 = poor quality to 5 = excellent educational quality. As validated international guidelines are lacking, we used the guidelines of the Swiss Society of Rheumatology as the criterion standard for the procedure. Of more than a thousand findings, 13 videos met the inclusion criteria. Of those, 2 contained additional animated video material: one was purely animated, and one was a checklist. The average length was 3.31 ± 2.28 minutes. The most popular video had 1388 hits per month. Our mean global score for educational value was 3.1 ± 1.0. Eight videos (62%) were considered useful for teaching purposes. Use of a "no-touch" procedure, meaning that the skin, once disinfected, remains untouched before needle penetration, was present in all videos. Six videos (46%) demonstrated full sterile conditions. There was no clear preference for a medial (n = 8) versus lateral (n = 5) approach. A small number of YouTube videos on knee arthrocentesis appeared to be suitable for application in a Web-based format for medical students, fellows, and residents.
The low mean global score for overall educational value suggests that future video-based instructional materials on YouTube would need improvement before their regular use for teaching could be recommended.
Bespalova, Nadejda; Morgan, Juliet; Coverdale, John
2016-02-01
Because training residents and faculty to identify human trafficking victims is a major public health priority, the authors review existing assessment tools. PubMed and Google were searched using combinations of search terms including human, trafficking, sex, labor, screening, identification, and tool. Nine screening tools that met the inclusion criteria were found. They varied greatly in length, format, target demographic, supporting resources, and other parameters. Only two tools were designed specifically for healthcare providers. Only one tool was formally assessed to be valid and reliable in a pilot project in trafficking victim service organizations, although it has not been validated in the healthcare setting. This toolbox should facilitate the education of resident physicians and faculty in screening for trafficking victims, assist educators in assessing screening skills, and promote future research on the identification of trafficking victims.
Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-01-01
Background The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for the design and delivery of such websites, particularly those aimed at the general public. Objective This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). Methods A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website, in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. Results The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Conclusions The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion. PMID:23981848
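The Flesch Reading Ease score used in the correlations above follows a fixed formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A sketch follows, with a deliberately naive vowel-group syllable counter; real readability tools use dictionary-based syllable counts.

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count vowel groups (rough heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores read more easily."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))
```

Note that very simple text can score above 100; conventional interpretation places plain English around 60-70.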
NASA Astrophysics Data System (ADS)
Olsen, M. J.; Leshchinsky, B. A.; Tanyu, B. F.
2014-12-01
Landslides are a global natural hazard, resulting in severe economic, environmental and social impacts every year. Often, landslides occur in areas of repeated slope instability, but despite these trends, significant residential developments and critical infrastructure are built in the shadow of past landslide deposits and marginally stable slopes. These hazards, despite their sometimes enormous scale and regional propensity, are difficult to detect on the ground, often due to vegetative cover. However, new developments in remote sensing technology, specifically Light Detection and Ranging (LiDAR) mapping, are providing a new means of viewing our landscape. Airborne LiDAR, combined with a level of post-processing, enables the creation of spatial data representative of the earth beneath the vegetation, highlighting the scars of unstable slopes of the past. This presents a revolutionary technique for mapping landslide deposits and their associated regions of risk; yet inventorying is often done manually, an approach that can be tedious, time-consuming and subjective. The LiDAR bare-earth data, however, present the opportunity to use this remote sensing technology and typical landslide geometry to create an automated algorithm that can detect and inventory deposits on a landscape scale. This algorithm, called the Contour Connection Method (CCM), functions by first detecting steep gradients, often associated with the headscarp of a failed hillslope, and initiating a search, highlighting deposits downslope of the failure. Based on input search gradients, CCM can highlight regions identified as landslides consistently on a landscape scale, and is capable of mapping more than 14,000 hectares rapidly (<30 minutes). CCM has shown preliminary agreement with manual landslide inventorying in Oregon's Coast Range, achieving almost 90% agreement with inventorying performed by a trained geologist.
The global threat of landslides necessitates new and effective tools for inventorying regions of risk to protect people, infrastructure and the environment from landslide hazards. Use of the CCM algorithm combined with judgment and rapidly developing remote sensing technology may help better define these regions of risk.
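The first stage of a CCM-style search, flagging steep gradients that may mark a headscarp, can be sketched on a toy DEM. The connection/search stage and all CCM specifics are omitted here; the threshold and grid geometry are illustrative assumptions.

```python
def steep_cells(dem, cellsize, slope_threshold):
    """First step of a CCM-style search (sketch): flag grid cells whose
    local slope magnitude exceeds a threshold (candidate headscarps)."""
    rows, cols = len(dem), len(dem[0])
    flagged = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Forward-difference gradients in x and y (rise over run).
            dzdx = (dem[i][j + 1] - dem[i][j]) / cellsize
            dzdy = (dem[i + 1][j] - dem[i][j]) / cellsize
            slope = (dzdx ** 2 + dzdy ** 2) ** 0.5
            if slope > slope_threshold:
                flagged.append((i, j))
    return flagged

# Toy DEM (elevations in m, 10 m cells): a sharp 60 m drop between columns.
dem = [[100, 100, 40, 40],
       [100, 100, 40, 40],
       [100, 100, 40, 40]]
cells = steep_cells(dem, cellsize=10.0, slope_threshold=1.0)
```

The full algorithm would then walk contours downslope from each flagged cell to connect the deposit region.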
Health Information on Internet: Quality, Importance, and Popularity of Persian Health Websites
Samadbeik, Mahnaz; Ahmadi, Maryam; Mohammadi, Ali; Mohseni Saravi, Beniamin
2014-01-01
Background: The Internet has provided great opportunities for disseminating both accurate and inaccurate health information. The quality of information is therefore a widespread concern affecting human life. Despite the increasingly substantial growth in the number of users, Persian health websites, and the proportion of internet-using patients, little is known about the quality of Persian medical and health websites. Objectives: The current study aimed first to assess the quality, popularity and importance of websites providing Persian health-related information, and second to evaluate the correlation of the popularity and importance rankings with quality scores on the Internet. Materials and Methods: The sample websites were identified by entering health-related keywords into the four most popular search engines of Iranian users, based on the Alexa ranking at the time of the study. Each selected website was assessed using three established tools: the Bomba and Land Index, Google PageRank and the Alexa ranking. Results: The evaluated websites' characteristics (ownership structure, database, scope and objective) had no effect on the Alexa traffic global rank, Alexa traffic rank in Iran, Google PageRank or Bomba total score. Most websites (78.9 percent, n = 56) were in the moderate category (8 ≤ x ≤ 11.99) based on their quality levels. There was no statistically significant association between Google PageRank and the Bomba index variables or Alexa traffic global rank (P > 0.05). Conclusions: The Persian health websites had better Bomba quality scores on availability and usability guidelines than on other guidelines. The Google PageRank did not properly reflect the real quality of the evaluated websites, and Internet users seeking online health information should not rely on it for any kind of prejudgment regarding Persian health websites. However, they can use the Iran Alexa rank as a primary filtering tool for these websites.
Therefore, designing search engines dedicated to explore accredited Persian health-related Web sites can be an effective method to access high-quality Persian health websites. PMID:24910795
Open Source Tools for Assessment of Global Water Availability, Demands, and Scarcity
NASA Astrophysics Data System (ADS)
Li, X.; Vernon, C. R.; Hejazi, M. I.; Link, R. P.; Liu, Y.; Feng, L.; Huang, Z.; Liu, L.
2017-12-01
Water availability and water demands are essential factors for estimating water scarcity conditions. To reproduce historical observations and to quantify future changes in water availability and water demand, two open source tools have been developed by the JGCRI (Joint Global Change Research Institute): Xanthos and GCAM-STWD. Xanthos is a gridded global hydrologic model designed to quantify and analyze water availability in 235 river basins. Xanthos uses runoff-generation and river-routing modules to simulate both historical and future estimates of total runoff and streamflows on a monthly time step at a spatial resolution of 0.5 degrees. GCAM-STWD is a spatiotemporal water disaggregation model used with the Global Change Assessment Model (GCAM) to spatially downscale global water demands for major end-use sectors (irrigation, domestic, electricity generation, mining, and manufacturing) from the region scale to the scale of 0.5 degrees. GCAM-STWD then temporally downscales the gridded annual global water demands to monthly results. These two tools, written in Python, can be integrated to assess global, regional or basin-scale water scarcity or water stress. Both tools are extensible to ensure flexibility and promote contributions from researchers who utilize GCAM and study global water use and supply.
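The temporal-downscaling step described above, splitting an annual demand total into monthly values, is in essence a proportional disaggregation. A sketch follows with a made-up seasonal weight profile; GCAM-STWD's actual sector-specific profiles differ.

```python
def downscale_annual_to_monthly(annual_demand, monthly_weights):
    """Disaggregate an annual water-demand total into 12 monthly values
    in proportion to a seasonal weight profile."""
    total = sum(monthly_weights)
    return [annual_demand * w / total for w in monthly_weights]

# Illustrative irrigation-like profile: demand peaks in the summer months.
weights = [1, 1, 2, 3, 5, 8, 9, 8, 5, 3, 2, 1]
monthly = downscale_annual_to_monthly(480.0, weights)
```

Because the weights are normalized inside the function, the twelve monthly values always sum back to the annual total, preserving mass balance.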
On a numerical solving of random generated hexamatrix games
NASA Astrophysics Data System (ADS)
Orlov, Andrei; Strekalovskiy, Alexander
2016-10-01
In this paper, we develop a global search method for finding a Nash equilibrium in a hexamatrix game (a polymatrix game of three players). The method is based, on the one hand, on a theorem establishing the equivalence between finding a Nash equilibrium in the game and solving a special mathematical optimization problem, and, on the other hand, on Global Search Theory for solving the latter problem. The efficiency of this approach is demonstrated by the results of computational testing.
Search Analytics: Automated Learning, Analysis, and Search with Open Source
NASA Astrophysics Data System (ADS)
Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.
2016-12-01
The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extractions and visualizations of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a users accuracy of 82 to 84%." 
Without reading a single paper, we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection user's accuracy of 82-84%, which could have tangible, direct downstream implications for crop protection. Automatically assimilating this information expedites and supplements human analysis, and, ultimately, Search Analytics and its foundation of open source tools will result in more efficient scientific investment and research.
NASA Astrophysics Data System (ADS)
Sarni, W.
2017-12-01
Water scarcity and poor water quality impact economic development, business growth, and social well-being. Water has become, in our generation, the foremost critical local, regional, and global issue of our time. Despite this, there is no water hub or water technology accelerator solely dedicated to water data and tools, and the public and private sectors need vastly improved data management and visualization tools. This is the WetDATA opportunity: to develop a water data tech hub dedicated to water data acquisition, analytics, and visualization tools for informed policy and business decisions. WetDATA's tools will help incubate disruptive water data technologies and accelerate adoption of current water data solutions. WetDATA is a Colorado-based (501c3) global hub for water data analytics and technology innovation. WetDATA's vision is to be a global leader in water information and data technology innovation and to collaborate with other US and global water technology hubs. ROADMAP: * Portal (www.wetdata.org) to provide stakeholders with tools and resources to understand related water risks. * Initial activities will provide education, awareness, and tools to stakeholders to support the implementation of the Colorado State Water Plan. * Leverage the Western States Water Council Water Data Exchange database. * Development of visualization, predictive analytics, and AI tools to engage stakeholders and provide actionable data and information. TOOLS: * Education: Provide information on water issues and risks at the local, state, national, and global scale. * Visualizations: Development of data analytics and visualization tools based upon the 2030 Water Resources Group methodology to support the implementation of the Colorado State Water Plan. * Predictive Analytics: Accessing publicly available water databases and using machine learning to develop water availability forecasting tools, and time-lapse images to support city and urban planning.
Formative evaluation of a patient-specific clinical knowledge summarization tool
Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R.; Weir, Charlene R.
2015-01-01
Objective To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians find answers to their clinical questions, and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians' perceived decision quality compared with standard search of UpToDate and PubMed. Materials and methods Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. Results The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high-quality randomized controlled trials and systematic reviews) and UpToDate. Two thirds of the study participants completed 15 out of 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physicians' perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). Conclusions The CKS prototype was well accepted by physicians in terms of both usability and usefulness. Physicians perceived better decision quality with the CKS prototype compared to standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and a small sample size, conclusions regarding efficiency and efficacy are exploratory. PMID:26612774
Formative evaluation of a patient-specific clinical knowledge summarization tool.
Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R; Weir, Charlene R
2016-02-01
To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians find answers to their clinical questions, and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians' perceived decision quality compared with standard search of UpToDate and PubMed. Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high-quality randomized controlled trials and systematic reviews) and UpToDate. Two thirds of the study participants completed 15 out of 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physicians' perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). The CKS prototype was well accepted by physicians in terms of both usability and usefulness. Physicians perceived better decision quality with the CKS prototype compared to standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and a small sample size, conclusions regarding efficiency and efficacy are exploratory. Published by Elsevier Ireland Ltd.
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
Particle swarm optimization (PSO) can improve control precision and has great application value in training neural networks, fuzzy system control, and related fields. When the traditional particle swarm algorithm is used to train feed-forward neural networks, however, its search efficiency is low and it easily falls into local convergence. This paper proposes an improved particle swarm optimization algorithm based on error back-propagation gradient descent. Particles are ranked by fitness so the optimization problem is considered as a whole, and each particle updates its velocity and position according to its individual best and the global best. Making the particles learn more from the social (global) optimum and less from their own individual optimum helps them avoid local optima, while the gradient information from back-propagation training of the BP neural network accelerates the PSO's local search and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and remains close to it thereafter; within the same running time it achieves faster convergence and better search performance, with particular improvement in later-stage search efficiency.
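For reference, a minimal generic PSO loop looks like the sketch below. This illustrates only the baseline velocity/position update driven by personal and global bests, not the paper's back-propagation hybridization; the coefficient values (`w`, `c1`, `c2`) are common textbook choices, not the paper's settings.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive (personal best) + social (global best) terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the 3-D sphere function as a smoke test.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Weighting the social term more heavily than the cognitive term (larger `c2` than `c1`) is one simple way to realize the "learn more from the social optimum" bias the abstract describes.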
Camuso, Natasha; Bajaj, Prerna; Dudgeon, Deborah; Mitera, Gunita
2016-08-01
Tools to collect patient-reported outcome measures (PROMs) are frequently used in healthcare settings to collect the information that is most meaningful to patients. Because patients and healthcare providers often differ in how they rank the symptoms most meaningful to the patient, engaging patients in the development of PROMs is extremely important. This review aimed to identify studies describing how patients are involved in the item generation stage of cancer-specific PROM tools developed for cancer patients. A literature search was conducted using keywords relevant to PROMs, cancer, and patient engagement, together with a manual search of relevant reference lists. Inclusion criteria stipulated that publications must describe patient engagement in the item generation stage of development of cancer-specific PROM tools; results were excluded if they were duplicates or non-English. The initial search yielded 230 publications. After removal of duplicates and review of publications, 6 were deemed relevant, and 14 additional publications were retrieved through the manual search of references. A total of 13 unique PROM tools that included patient input in item generation were identified. The most common method of patient engagement was qualitative interviews or focus groups. Despite recommendations from international groups and the emphasized importance of incorporating patient feedback in all stages of PROM development, few unique cancer-specific tools have incorporated patient input in item generation. Moving forward, a framework of best practices for engaging patients in developing PROMs is warranted to support high-quality patient-centered care.
NASA Astrophysics Data System (ADS)
Desconnets, Jean-Christophe; Giuliani, Gregory; Guigoz, Yaniss; Lacroix, Pierre; Mlisa, Andiswa; Noort, Mark; Ray, Nicolas; Searby, Nancy D.
2017-02-01
The discovery of and access to capacity building resources are often essential for conducting environmental projects based on Earth Observation (EO) resources, whether those are EO products, methodological tools, techniques, organizations that impart training in these techniques, or projects that have shown practical achievements. Recognizing this opportunity and need, the European Commission, through two FP7 projects, teamed up with the Group on Earth Observations (GEO) and the Committee on Earth Observation Satellites (CEOS). The Global Earth Observation CApacity Building (GEOCAB) portal aims at compiling all current capacity building efforts on the use of EO data for societal benefit into an easily updateable and user-friendly portal. GEOCAB offers a faceted search to improve the user discovery experience, with a fully interactive world map of all inventoried projects and activities. This paper focuses on the conceptual framework used to implement the underlying platform. An ISO 19115 metadata model and an associated terminological repository are the core elements behind a semantic search application and an interoperable discovery service. The organization and contribution of the different user communities that manage and update the content of GEOCAB are also addressed.
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, L2-SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of L2-SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both L2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
The Effectiveness of Aromatherapy for Depressive Symptoms: A Systematic Review.
Sánchez-Vidaña, Dalinda Isabel; Ngai, Shirley Pui-Ching; He, Wanjia; Chow, Jason Ka-Wing; Lau, Benson Wui-Man; Tsang, Hector Wing-Hong
2017-01-01
Background. Depression is one of the greatest health concerns, affecting 350 million people globally. Aromatherapy is a popular complementary and alternative medicine (CAM) intervention chosen by people with depression. Due to the growing popularity of aromatherapy for alleviating depressive symptoms, in-depth evaluation of the evidence-based clinical efficacy of aromatherapy is urgently needed. Purpose. This systematic review aims to provide an analysis of the clinical evidence on the efficacy of aromatherapy for depressive symptoms in any type of patient. Methods. A systematic database search was carried out using predefined search terms in 5 databases: AMED, CINHAL, CCRCT, MEDLINE, and PsycINFO. Outcome measures included scales measuring levels of depressive symptoms. Results. Twelve randomized controlled trials were included, and two administration methods for the aromatherapy intervention were identified: inhaled aromatherapy (5 studies) and massage aromatherapy (7 studies). Seven studies showed improvement in depressive symptoms. Limitations. The quality of half of the included studies is low, the administration protocols varied considerably among the studies, and different assessment tools were employed. Conclusions. Aromatherapy showed potential as an effective therapeutic option for the relief of depressive symptoms in a wide variety of subjects. In particular, aromatherapy massage appeared to have more beneficial effects than inhaled aromatherapy.
Jones, Bethan M; Edwards, Richard J; Skipp, Paul J; O'Connor, C David; Iglesias-Rodriguez, M Debora
2011-06-01
Emiliania huxleyi is a unicellular marine phytoplankton species known to play a significant role in global biogeochemistry. Through the dual roles of photosynthesis and production of calcium carbonate (calcification), carbon is transferred from the atmosphere to ocean sediments. Almost nothing is known about the molecular mechanisms that control calcification, a process that is tightly regulated within the cell. To initiate proteomic studies on this important and phylogenetically remote organism, we have devised efficient protein extraction protocols and developed a bioinformatics pipeline that allows the statistically robust assignment of proteins from MS/MS data using preexisting EST sequences. The bioinformatics tool, termed BUDAPEST (Bioinformatics Utility for Data Analysis of Proteomics using ESTs), is fully automated and was used to search against data generated from three strains. BUDAPEST increased the number of identifications over standard protein database searches from 37 to 99 proteins when data were amalgamated. Proteins involved in diverse cellular processes were uncovered. For example, experimental evidence was obtained for a novel type I polyketide synthase and for various photosystem components. The proteomic and bioinformatic approaches developed in this study are of wider applicability, particularly to the oceanographic community where genomic sequence data for species of interest are currently scarce.
FOAMSearch.net: A custom search engine for emergency medicine and critical care.
Raine, Todd; Thoma, Brent; Chan, Teresa M; Lin, Michelle
2015-08-01
The number of online resources read by and pertinent to clinicians has increased dramatically. However, most healthcare professionals still use mainstream search engines as their primary port of entry to the resources on the Internet. These search engines use algorithms that do not make it easy to find clinician-oriented resources. FOAMSearch, a custom search engine (CSE), was developed to find relevant, high-quality online resources for emergency medicine and critical care (EMCC) clinicians. Using Google™ algorithms, it searches a vetted list of >300 blogs, podcasts, wikis, knowledge translation tools, clinical decision support tools and medical journals. Utilisation has increased progressively to >3000 users/month since its launch in 2011. Further study of the role of CSEs to find medical resources is needed, and it might be possible to develop similar CSEs for other areas of medicine. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
PathVisio-Faceted Search: an exploration tool for multi-dimensional navigation of large pathways
Fried, Jake Y.; Luna, Augustin
2013-01-01
Purpose: The PathVisio-Faceted Search plugin helps users explore and understand complex pathways by overlaying experimental data and data from web services, such as Ensembl BioMart, onto diagrams drawn using formalized notations in PathVisio. The plugin then provides a filtering mechanism, known as a faceted search, to find and highlight diagram nodes (e.g. genes and proteins) of interest based on the imported data. The tool additionally provides a flexible scripting mechanism to handle complex queries. Availability: The PathVisio-Faceted Search plugin is compatible with PathVisio 3.0 and above; PathVisio runs on Windows, Mac OS X, and Linux. The plugin, documentation, example diagrams, and Groovy scripts are available at http://PathVisio.org/wiki/PathVisioFacetedSearchHelp. The plugin is free, open-source, and licensed under the Apache 2.0 License. Contact: augustin@mail.nih.gov or jakeyfried@gmail.com PMID:23547033
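The core of a faceted search like the one this plugin provides is a conjunctive filter over node attributes. The sketch below is a generic illustration; the node attributes and facet names are hypothetical examples, not the plugin's actual data model.

```python
def faceted_filter(nodes, facets):
    """Return the nodes matching every facet; each facet maps an
    attribute name to the set of accepted values (AND across facets,
    OR within a facet's value set)."""
    return [n for n in nodes
            if all(n.get(attr) in accepted for attr, accepted in facets.items())]

# Hypothetical pathway-diagram nodes annotated with imported data.
nodes = [
    {"id": "TP53", "type": "gene", "expression": "up"},
    {"id": "MDM2", "type": "gene", "expression": "down"},
    {"id": "ATP",  "type": "metabolite", "expression": None},
]

# Highlight up-regulated genes only.
hits = faceted_filter(nodes, {"type": {"gene"}, "expression": {"up"}})
```

Narrowing a result set then simply means adding another facet to the dictionary, which matches the drill-down interaction style of faceted interfaces.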
A user-friendly tool for medical-related patent retrieval.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Health-related information retrieval is complicated by the variety of nomenclatures available for naming entities, since different communities of users name the same entity in different ways. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules such as chemical query normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced fairly mixed results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts, although this result should be weighed against the limited evaluation sample. We also expect that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.
A systematic review of substance misuse assessment packages.
Sweetman, Jennifer; Raistrick, Duncan; Mdege, Noreen D; Crosby, Helen
2013-07-01
Health-care systems globally are moving away from process measures of performance to payments for outcomes achieved. It follows that there is a need for a selection of proven quality tools that are suitable for undertaking comprehensive assessments and outcomes assessments. This review aimed to identify and evaluate existing comprehensive assessment packages. The work is part of a national program in the UK, Collaborations in Leadership of Applied Health Research and Care. Systematic searches were carried out across major databases to identify instruments designed to assess substance misuse. For those instruments identified, searches were carried out using the Cochrane Library, Embase, Ovid MEDLINE(®) and PsychINFO to identify articles reporting psychometric data. From 595 instruments, six met the inclusion criteria: Addiction Severity Index; Chemical Use, Abuse and Dependence Scale; Form 90; Maudsley Addiction Profile; Measurements in the Addictions for Triage and Evaluation; and Substance Abuse Outcomes Module. The most common reasons for exclusion were that instruments were: (i) designed for a specific substance (239); (ii) not designed for use in addiction settings (136); (iii) not providing comprehensive assessment (89); and (iv) not suitable as an outcome measure (20). The six packages are very different and suited to different uses. No package had adequate evaluation of their properties and so the emphasis should be on refining a small number of tools with very general application rather than creating new ones. An alternative to using 'off-the-shelf' packages is to create bespoke packages from well-validated, single-construct scales. © 2013 Australasian Professional Society on Alcohol and other Drugs.
Swellix: a computational tool to explore RNA conformational space.
Sloat, Nathan; Liu, Jui-Wen; Schroeder, Susan J
2017-11-21
The sequence of nucleotides in an RNA determines the possible base pairs for an RNA fold and thus also determines the overall shape and function of an RNA. The Swellix program presented here combines a helix abstraction with a combinatorial approach to the RNA folding problem in order to compute all possible non-pseudoknotted RNA structures for RNA sequences. The Swellix program builds on the Crumple program and can include experimental constraints on global RNA structures such as the minimum number and lengths of helices from crystallography, cryoelectron microscopy, or in vivo crosslinking and chemical probing methods. The conceptual advance in Swellix is to count helices and generate all possible combinations of helices rather than counting and combining base pairs. Swellix bundles similar helices and includes improvements in memory use and efficient parallelization. Biological applications of Swellix are demonstrated by computing the reduction in conformational space and entropy due to naturally modified nucleotides in tRNA sequences and by motif searches in Human Endogenous Retroviral (HERV) RNA sequences. The Swellix motif search reveals occurrences of protein and drug binding motifs in the HERV RNA ensemble that do not occur in minimum free energy or centroid predicted structures. Swellix presents significant improvements over Crumple in terms of efficiency and memory use. The efficient parallelization of Swellix enables the computation of sequences as long as 418 nucleotides with sufficient experimental constraints. Thus, Swellix provides a practical alternative to free energy minimization tools when multiple structures, kinetically determined structures, or complex RNA-RNA and RNA-protein interactions are present in an RNA folding problem.
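The helix-counting idea, enumerating candidate helices and keeping only combinations that share no nucleotides and contain no pseudoknots, can be sketched as follows. The `(i, j, length)` helix representation and the toy helix list are assumptions for illustration, not Swellix's internal data structures or its efficient bundling scheme.

```python
from itertools import combinations

def positions(h):
    """Nucleotide positions occupied by helix h = (i, j, length),
    pairing (i, j), (i+1, j-1), ..."""
    i, j, length = h
    return set(range(i, i + length)) | set(range(j - length + 1, j + 1))

def nested_or_disjoint(a, b):
    """True if the outermost pair intervals are nested or disjoint
    (i.e. the two helices do not form a pseudoknot)."""
    (i1, j1, _), (i2, j2, _) = a, b
    if j1 < i2 or j2 < i1:
        return True
    return (i1 < i2 and j2 < j1) or (i2 < i1 and j1 < j2)

def compatible(a, b):
    return positions(a).isdisjoint(positions(b)) and nested_or_disjoint(a, b)

def all_structures(helices):
    """Enumerate every pairwise-compatible combination of helices,
    including the empty (unfolded) structure."""
    out = [frozenset()]
    for r in range(1, len(helices) + 1):
        for combo in combinations(helices, r):
            if all(compatible(a, b) for a, b in combinations(combo, 2)):
                out.append(frozenset(combo))
    return out

# Toy example: h2 nests inside h1; h3 would cross both (pseudoknot).
h1, h2, h3 = (0, 30, 3), (5, 20, 3), (10, 40, 3)
structures = all_structures([h1, h2, h3])
```

Working at the helix level rather than the base-pair level shrinks the combinatorial space dramatically, which is the conceptual advance the abstract highlights.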
Customised search and comparison of in situ, satellite and model data for ocean modellers
NASA Astrophysics Data System (ADS)
Hamre, Torill; Vines, Aleksander; Lygre, Kjetil
2014-05-01
For the ocean modelling community, the amount of data available from historical and upcoming in situ sensor networks and satellite missions provides a rich opportunity to validate and improve simulation models. However, the problem of making the different data interoperable and intercomparable remains, owing, among other things, to differences in the terminology and formats used by different data providers and to the differing granularity of, for example, in situ data and ocean models. The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance knowledge and predictive capacity regarding how marine ecosystems will respond to global change. One specific objective of the project has been to improve the technology for accessing historical plankton and associated environmental data sets, along with earth observation data and simulation outputs. To this end, we have developed a web portal enabling ocean modellers to easily search for in situ or satellite data overlapping in space and time and to compare the retrieved data with their model results. The in situ data are retrieved from a geo-spatial repository containing both historical and new physical, biological, and chemical parameters for the Southern Ocean, Atlantic, Nordic Seas, and the Arctic. Satellite-derived quantities of similar parameters from the same areas are retrieved from another geo-spatial repository established in the project. Both repositories are accessed through standard interfaces: the Open Geospatial Consortium (OGC) Web Map Service (WMS) and Web Feature Service (WFS), and the OPeNDAP protocol, respectively. While the developed data repositories use standard terminology to describe the parameters, the measured in situ biological parameters in particular are too fine-grained to be immediately useful for modelling purposes. Therefore, the plankton parameters were grouped according to category, size, and, where available, element.
This grouping was reflected in the web portal's graphical user interface, where the groups and subgroups were organized in a tree structure, enabling the modeller to quickly get an overview of the available data, going into more detail (subgroups) if needed or staying at a higher level of abstraction (merging the parameters below) if this provided a better basis for comparison with the model parameters. Once the modeller had settled on a suitable level of detail, the system retrieved the available in situ parameters. The modeller could then select among pre-defined models or upload their own model forecast file (in NetCDF/CF format) for comparison with the retrieved in situ data. The comparison can be shown in different kinds of plots (e.g. scatter plots) or through simple statistical measures, and near-coincident in situ and model values can be exported for further analysis in the modeller's own tools. During data search and presentation, the modeller can set both the query criteria and the associated metadata to include in the display and export of the retrieved data. Satellite-derived parameters can be queried and compared with model results in the same manner. With the developed prototype system, we have demonstrated that a customised tool for searching, presenting, comparing, and exporting ocean data from multiple platforms (in situ, satellite, model) makes it easy to compare model results with independent observations. With further enhancement of functionality and inclusion of more data, we believe the resulting system can greatly benefit the wider community of ocean modellers looking for data and tools to validate their models.
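The model-observation comparison described above can be reduced to a simple collocation step: pair each in situ observation with the nearest model grid point and compute summary statistics. This is an illustrative sketch, not the GreenSeas portal's implementation; the flat-coordinate nearest-neighbour match and the toy values are assumptions.

```python
import math

def collocate(obs, model):
    """Pair each observation with its nearest model grid point and
    compute bias and RMSE of model minus observation.
    obs: list of (lat, lon, value); model: dict (lat, lon) -> value."""
    diffs = []
    for lat, lon, value in obs:
        nearest = min(model, key=lambda g: (g[0] - lat) ** 2 + (g[1] - lon) ** 2)
        diffs.append(model[nearest] - value)
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Toy model field on two grid points and two in situ observations.
model = {(60.0, 5.0): 10.0, (60.5, 5.0): 12.0}
obs = [(60.1, 5.0, 9.0), (60.4, 5.0, 13.0)]
bias, rmse = collocate(obs, model)
```

A production version would use great-circle distances and time windows, but the bias/RMSE export pattern is the same one a modeller would feed into scatter plots or further analysis.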
Need a Special Tool? Make It Yourself!
ERIC Educational Resources Information Center
Mordini, Robert D.
2007-01-01
People seem to have created a tool for every purpose. If a person searches diligently, he can usually find the tool he needs. However, several things may affect this process such as time, cost of the tool, and limited tool sources. The solution to all these is to make the tool yourself. People have made tools for many thousands of years, and with…
Solar variability: Implications for global change
NASA Technical Reports Server (NTRS)
Lean, Judith; Rind, David
1994-01-01
Solar variability is examined in search of implications for global change. The topics covered include the following: solar variation modification of global surface temperature; the significance of solar variability with respect to future climate change; and methods of reducing the uncertainty of the potential amplitude of solar variability on longer time scales.
The Globalization Classroom: New Option for Becoming More Human?
ERIC Educational Resources Information Center
Svetelj, Tony
2014-01-01
Within a multi-cultural and multi-religious society, exposed to the challenges of globalization, a traditional understanding of humanism offers insufficient frameworks for an adequate comprehension of human agency, its flourishing and search for meaning. The process of globalization continuously shakes the pedagogical assumptions and principles of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas
This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low-frequency regime of the LISA band, tens of thousands of galactic binary systems are expected to be emitting gravitational waves detectable by LISA. The challenge of extracting the parameters of such a large number of sources from the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As the signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented, with results derived from applications of the GA searches to simulated LISA data.
Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses
Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M.; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V.; Ma’ayan, Avi
2018-01-01
Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated ‘canned’ analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also contains the indexing of 4,901 published bioinformatics software tools, and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools. PMID:29485625
Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses.
Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V; Ma'ayan, Avi
2018-02-27
Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated 'canned' analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also contains the indexing of 4,901 published bioinformatics software tools, and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools.
2013-01-01
Background Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions derived from the structured search procedures conventional in scientific literature retrieval, and to provide an overview of the current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. Methods General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. The Cochrane review search strategies were translated into Google Scholar search expressions as faithfully as possible while preserving the original search semantics. The references of the studies included in the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches, and relative recall and precision were calculated. Results We investigated Cochrane reviews with between 11 and 70 included references each, 396 references in total. The Google Scholar searches produced result sets of between 4,320 and 67,800 hits, 291,190 hits in total. The relative recall of the Google Scholar searches ranged from a minimum of 76.2% to a maximum of 100% (7 searches); their precision ranged from 0.05% to 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. Conclusion The reported relative recall must be interpreted with care.
It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary. PMID:24160679
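The relative-recall and precision figures quoted in this record follow directly from their definitions. A small worked check (the count of 368 found references is inferred from the reported 92.9% of 396; it is not stated in the abstract):

```python
def relative_recall(found_gold, total_gold):
    """Share of gold-standard references retrieved by the search."""
    return found_gold / total_gold

def precision(found_gold, total_hits):
    """Share of retrieved hits that are gold-standard references."""
    return found_gold / total_hits

# 368 of the 396 Cochrane references retrieved across 291,190 total hits
# reproduces the reported overall figures of 92.9% and 0.13%.
recall_pct = round(100 * relative_recall(368, 396), 1)     # → 92.9
precision_pct = round(100 * precision(368, 291190), 2)     # → 0.13
```

Note how a recall above 90% coexists with a precision near 0.1%: the enormous result sets are exactly why the authors judge Google Scholar unusable for structured retrieval.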
FlavonoidSearch: A system for comprehensive flavonoid annotation by mass spectrometry.
Akimoto, Nayumi; Ara, Takeshi; Nakajima, Daisuke; Suda, Kunihiro; Ikeda, Chiaki; Takahashi, Shingo; Muneto, Reiko; Yamada, Manabu; Suzuki, Hideyuki; Shibata, Daisuke; Sakurai, Nozomu
2017-04-28
Currently, in mass spectrometry-based metabolomics, limited reference mass spectra are available for flavonoid identification. In the present study, a database of probable mass fragments for 6,867 known flavonoids (FsDatabase) was manually constructed based on new structure- and fragmentation-related rules using new heuristics to overcome flavonoid complexity. We developed the FlavonoidSearch system for flavonoid annotation, which consists of the FsDatabase and a computational tool (FsTool) to automatically search the FsDatabase using the mass spectra of metabolite peaks as queries. This system showed the highest identification accuracy for the flavonoid aglycone when compared to existing tools and revealed accurate discrimination between the flavonoid aglycone and other compounds. Sixteen new flavonoids were found from parsley, and the diversity of the flavonoid aglycone among different fruits and vegetables was investigated.
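A toy illustration of the kind of lookup FsTool performs: counting observed m/z peaks that fall within a tolerance of a database entry's predicted fragment masses. The masses and tolerance below are invented for illustration; the real FsDatabase scoring rules are in the paper.

```python
def match_fragments(peak_mzs, db_fragment_mzs, tol=0.01):
    """Count observed m/z peaks lying within `tol` of any database
    fragment mass (hypothetical scoring, not FsTool's actual one)."""
    return sum(1 for mz in peak_mzs
               if any(abs(mz - ref) <= tol for ref in db_fragment_mzs))

# Two of the three observed peaks match a hypothetical flavonoid entry.
score = match_fragments([153.018, 271.060, 300.000],
                        [153.019, 271.061, 119.049])  # → 2
```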
A Registry for Planetary Data Tools and Services
NASA Astrophysics Data System (ADS)
Hardman, S.; Cayanan, M.; Hughes, J. S.; Joyner, R.; Crichton, D.; Law, E.
2018-04-01
The PDS Engineering Node has upgraded a prototype Tool Registry developed by the International Planetary Data Alliance to increase the visibility and enhance functionality along with incorporating the registered tools into PDS data search results.
When being narrow minded is a good thing: locally biased people show stronger contextual cueing.
Bellaera, Lauren; von Mühlenen, Adrian; Watson, Derrick G
2014-01-01
Repeated contexts allow us to find relevant information more easily. Learning such contexts has been proposed to depend either on global processing of the repeated context or on processing of the local region surrounding the target information. In this study, we measured the extent to which observers were biased by default towards processing at a more global or a more local level. The findings showed that the ability to use context to guide search was strongly related to an observer's local/global processing bias: locally biased people could use context to improve their search better than globally biased people. The results suggest that the extent to which context can be used depends crucially on the observer's attentional bias, and thus also on factors and influences that can change this bias.
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Cao, Leilei; Xu, Lihong; Goodman, Erik D.
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
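The distinctive GEA move named in this abstract is crossing every individual with the current global best rather than a random partner. A sketch of one such generation; the arithmetic crossover, Gaussian mutation, and hill-climbing local search below are stand-ins, not the paper's exact operators, and the mutation/local-search probabilities are fixed here rather than dynamic.

```python
import random

def gea_step(population, fitness, rng, p_mut=0.1, p_local=0.05, step=0.1):
    """One illustrative GEA generation (maximizing fitness)."""
    best = max(population, key=fitness)
    offspring = []
    for ind in population:
        alpha = rng.random()                         # blend towards the guide
        child = [alpha * b + (1 - alpha) * x for b, x in zip(best, ind)]
        child = [x + rng.gauss(0, step) if rng.random() < p_mut else x
                 for x in child]                     # mutation
        if rng.random() < p_local:                   # occasional local search
            trial = [x + rng.gauss(0, step / 10) for x in child]
            if fitness(trial) > fitness(child):
                child = trial
        offspring.append(child)
    # Greedy survivor selection: keep the better of each parent/child pair.
    return [max(pair, key=fitness) for pair in zip(population, offspring)]

# Toy run: maximize -(x^2 + y^2); the population homes in on (0, 0).
rng = random.Random(1)
f = lambda p: -(p[0] ** 2 + p[1] ** 2)
pop = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(20)]
for _ in range(100):
    pop = gea_step(pop, f, rng)
```

The greedy survivor selection makes the best fitness monotonically non-decreasing, which is the "greedy strategy" flavor of the title.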
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.
Cao, Leilei; Xu, Lihong; Goodman, Erik D
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. The selection of sample points for variable-fidelity approximation models, called nested designs, is fundamental to their construction. In this article, a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
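For orientation, a plain (non-maximin, non-nested) Latin hypercube sample looks like this: each axis is cut into n strata and each stratum is used exactly once per axis. The maximin spacing and low/high-fidelity nesting that the article contributes are not reproduced here.

```python
import random

def latin_hypercube(n, dim, rng):
    """Basic Latin hypercube sample of n points on [0, 1)^dim."""
    columns = [list(range(n)) for _ in range(dim)]
    for col in columns:
        rng.shuffle(col)                   # random stratum order per axis
    # One point per row: jitter each point inside its assigned stratum.
    return [[(columns[d][i] + rng.random()) / n for d in range(dim)]
            for i in range(n)]

pts = latin_hypercube(8, 2, random.Random(0))
```

By construction, projecting the 8 points onto either axis hits each of the 8 strata exactly once, which is the defining Latin hypercube property.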
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the concept of swarm intelligence from particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to govern the choice between the ACO and GHS; (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
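As background for the hybrid above, plain Harmony Search improvises one new harmony per iteration: each variable is drawn from memory with probability hmcr (possibly pitch-adjusted by bw), otherwise sampled at random, and the new harmony replaces the worst memory member if it is better. This is a sketch of the baseline HS only; the GHSACO modifications (ACO switching, pheromone updates) are not reproduced.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=1000, seed=0):
    """Plain Harmony Search, minimizing f over box bounds."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for i, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                   # memory consideration
                v = rng.choice(memory)[i]
                if rng.random() < par:                # pitch adjustment
                    v = min(max(v + rng.uniform(-bw, bw) * (hi - lo), lo), hi)
            else:                                     # random selection
                v = rng.uniform(lo, hi)
            new.append(v)
        worst = max(memory, key=f)
        if f(new) < f(worst):                         # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=f)

# Toy run on the 2-D sphere function.
best = harmony_search(lambda p: p[0] ** 2 + p[1] ** 2, [(-5, 5), (-5, 5)])
```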
QuickVina: accelerating AutoDock Vina using gradient-based heuristics for global optimization.
Handoko, Stephanus Daniel; Ouyang, Xuchang; Su, Chinh Tran To; Kwoh, Chee Keong; Ong, Yew Soon
2012-01-01
Predicting binding between a macromolecule and a small molecule is a crucial phase in the field of rational drug design. AutoDock Vina, one of the most widely used docking programs, released in 2009, uses an empirical scoring function to evaluate the binding affinity between the molecules and employs the iterated local search global optimizer for global optimization, achieving a significantly improved speed and better accuracy of the binding mode prediction compared to its predecessor, AutoDock 4. In this paper, we propose a further improvement in the local search algorithm of Vina by heuristically preventing some intermediate points from undergoing local search. Our improved version of Vina, dubbed QVina, achieved a maximum acceleration of about 25 times with an average speed-up of 8.34 times compared to the original Vina when tested on a set of 231 protein-ligand complexes, while keeping the optimal scores mostly identical. Using our heuristics, a larger number of ligands can be quickly screened against a given receptor within the same time frame.
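The QuickVina idea, skipping local search for candidate points unlikely to yield new minima, can be caricatured as follows. This is an assumption-laden sketch: the skip rule here is a simple distance test against already-refined points, and a fixed-step coordinate descent stands in for Vina's BFGS-based local optimizer.

```python
import random

def coordinate_descent(f, x, step=0.05, rounds=200):
    """Derivative-free refinement: try +/- step moves per coordinate."""
    x = list(x)
    for _ in range(rounds):
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
    return x

def ils_with_skip(f, refine, bounds, iters=200, skip_radius=0.3, seed=0):
    """Iterated local search with a QuickVina-style skip heuristic:
    candidates within skip_radius of an already-refined point are assumed
    to share its basin and are not refined again."""
    rng = random.Random(seed)
    refined, best, best_val = [], None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if any(sum((a - b) ** 2 for a, b in zip(x, r)) < skip_radius ** 2
               for r in refined):
            continue                                  # skip the local search
        x = refine(f, x)
        refined.append(x)
        if f(x) < best_val:
            best, best_val = x, f(x)
    return best, best_val

# Toy run on the 2-D sphere function.
sphere = lambda x: x[0] ** 2 + x[1] ** 2
best, best_val = ils_with_skip(sphere, coordinate_descent, [(-2, 2), (-2, 2)])
```

The speed-up comes from refusing redundant refinements while still finding the global minimum; QuickVina's actual heuristic operates on first-order information along the search trajectory rather than a plain radius test.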
Moon Search Algorithms for NASA's Dawn Mission to Asteroid Vesta
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mcfadden, Lucy A.; Skillman, David R.; McLean, Brian; Mutchler, Max; Carsenty, Uri; Palmer, Eric E.
2012-01-01
A moon or natural satellite is a celestial body that orbits a planetary body such as a planet, dwarf planet, or an asteroid. Scientists seek to understand the origin and evolution of our solar system by studying moons of these bodies. Additionally, searches for satellites of planetary bodies can be important to protect the safety of a spacecraft as it approaches or orbits a planetary body. If a satellite of a celestial body is found, the mass of that body can also be calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety on its mission to the asteroid Vesta primarily motivated the work of Dawn's Satellite Working Group (SWG) in the summer of 2011. Dawn mission scientists and engineers utilized various computational tools and techniques for Vesta's satellite search. The objectives of this paper are to 1) introduce the natural satellite search problem, 2) present the computational challenges, approaches, and tools used when addressing this problem, and 3) describe applications of various image processing and computational algorithms for performing satellite searches to the electronic imaging and computer science community. Furthermore, we hope that this communication will enable Dawn mission scientists to improve their satellite search algorithms and tools and be better prepared for performing the same investigation in 2015, when the spacecraft is scheduled to approach and orbit the dwarf planet Ceres.
Searching U.S. Patents: Core Collection and Suggestions for Service.
ERIC Educational Resources Information Center
Harwell, Kevin R.
1993-01-01
Provides fundamental information about patents, describes effective and affordable reference resources, and discusses specific issues in providing patent information services to inventors and other patrons. Basic resources, including CD-ROM products, patent classification and searching resources, and other search tools are described in an…
2003-09-01
Application of the Biosonar Measurement Tool (BMT) and Instrumented...dolphin biosonar (echolocation). Research work conducted by the Navy has addressed the characteristics of echolocation clicks, mechanisms of...information on dolphin echolocation that can be data mined for biosonar search strategies under real-world conditions. Results can be applied to the
Global Emergency Medicine: A Review of the Literature From 2015.
Becker, Torben K; Hansoti, Bhakti; Bartels, Susan; Bisanzo, Mark; Jacquet, Gabrielle A; Lunney, Kevin; Marsh, Regan; Osei-Ampofo, Maxwell; Trehan, Indi; Lam, Christopher; Levine, Adam C
2016-10-01
The Global Emergency Medicine Literature Review (GEMLR) conducts an annual search of peer-reviewed and gray literature relevant to global emergency medicine (EM) to identify, review, and disseminate the most important new research in this field to a global audience of academics and clinical practitioners. This year, 12,435 articles written in six languages were identified by our search. These articles were distributed among 20 reviewers for initial screening based on their relevance to the field of global EM. An additional two reviewers searched the gray literature. A total of 723 articles were deemed appropriate by at least one reviewer and approved by their editor for formal scoring of overall quality and importance. Two independent reviewers scored all articles. A total of 723 articles met our predetermined inclusion criteria and underwent full review. Sixty percent were categorized as emergency care in resource-limited settings (ECRLS), 17% as EM development (EMD), and 23% as disaster and humanitarian response (DHR). Twenty-four articles received scores of 18.5 or higher out of a maximum score of 20 and were selected for formal summary and critique. Inter-rater reliability between reviewers gave an intraclass correlation coefficient of 0.71 (95% confidence interval = 0.66 to 0.75). Studies and reviews with a focus on infectious diseases, trauma, and the diagnosis and treatment of diseases common in resource-limited settings represented the majority of articles selected for final review. In 2015, almost twice as many articles were found by our search compared to the 2014 review. The number of EMD articles increased, while the number of ECRLS articles decreased. The number of DHR articles remained stable. As in prior years, the majority of articles focused on infectious diseases. © 2016 by the Society for Academic Emergency Medicine.
The Human Transcript Database: A Catalogue of Full Length cDNA Inserts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouck, John; McLeod, Michael; Worley, Kim
1999-09-10
The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that produced reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command-line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.
Development and Validation of a Self-reported Questionnaire for Measuring Internet Search Dependence
Wang, Yifan; Wu, Lingdan; Zhou, Hongli; Xu, Jiaojing; Dong, Guangheng
2016-01-01
Internet search has become the most common way that people deal with issues and problems in everyday life. The wide use of Internet search has largely changed the way people search for and store information. There is a growing interest in the impact of Internet search on users’ affect, cognition, and behavior. Thus, it is essential to develop a tool to measure the changes in psychological characteristics as a result of long-term use of Internet search. The aim of this study is to develop a Questionnaire on Internet Search Dependence (QISD) and test its reliability and validity. We first proposed a preliminary structure and items of the QISD based on literature review, supplemental investigations, and interviews. We then assessed the psychometric properties and explored the factor structure of the initial version via exploratory factor analysis (EFA). The EFA results indicated that four dimensions of the QISD were very reliable, i.e., habitual use of Internet search, withdrawal reaction, Internet search trust, and external storage under Internet search. Finally, we tested the factor solution obtained from EFA through confirmatory factor analysis (CFA). The results of CFA confirmed that the four-dimension model fits the data well. In all, this study suggests that the 12-item QISD is of high reliability and validity and can serve as a preliminary tool to measure the features of Internet search dependence. PMID:28066753
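Internal consistency of the kind reported for the QISD dimensions is conventionally summarized with Cronbach's alpha. A minimal computation is sketched below; the toy data are invented, not from the study.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of total scores), for item_scores[item][respondent]."""
    k = len(item_scores)                      # number of items
    n = len(item_scores[0])                   # number of respondents

    def var(xs):                              # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in item_scores)
                          / var(totals))

# Two perfectly correlated items give maximal internal consistency.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])  # → 1.0
```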
Wang, Yifan; Wu, Lingdan; Zhou, Hongli; Xu, Jiaojing; Dong, Guangheng
2016-01-01
Internet search has become the most common way that people deal with issues and problems in everyday life. The wide use of Internet search has largely changed the way people search for and store information. There is a growing interest in the impact of Internet search on users' affect, cognition, and behavior. Thus, it is essential to develop a tool to measure the changes in psychological characteristics as a result of long-term use of Internet search. The aim of this study is to develop a Questionnaire on Internet Search Dependence (QISD) and test its reliability and validity. We first proposed a preliminary structure and items of the QISD based on literature review, supplemental investigations, and interviews. We then assessed the psychometric properties and explored the factor structure of the initial version via exploratory factor analysis (EFA). The EFA results indicated that four dimensions of the QISD were very reliable, i.e., habitual use of Internet search, withdrawal reaction, Internet search trust, and external storage under Internet search. Finally, we tested the factor solution obtained from EFA through confirmatory factor analysis (CFA). The results of CFA confirmed that the four-dimension model fits the data well. In all, this study suggests that the 12-item QISD is of high reliability and validity and can serve as a preliminary tool to measure the features of Internet search dependence.
Simple tools for assembling and searching high-density picolitre pyrophosphate sequence data.
Parker, Nicolas J; Parker, Andrew G
2008-04-18
The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large dataset is difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we found the need for tools to quickly search a set of reads for near-exact text matches. A set of tools is provided to search a large data set of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC, usable by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the data set, and gathering counts of sequences in the reads. Demonstrations are given of the use of the tools to help with checking an assembly against the fragment data set; investigating homopolymer lengths, repeat regions, and polymorphisms; and resolving inserted bases caused by incomplete chain extension. The additional information contained in a pyrophosphate sequencing data set beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here would allow anyone with basic computer skills and a standard PC to access this information.
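The core operation described, searching reads for near-exact text matches, reduces to scanning each read for windows within a small Hamming distance of a query. A toy stand-in (the authors' tools are Linux command-line utilities, not this Python; the reads and query below are invented):

```python
def find_near_matches(reads, query, max_mismatches=1):
    """Return (read_id, start, mismatches) for every window of each
    read matching `query` with at most `max_mismatches` substitutions."""
    hits = []
    q = len(query)
    for read_id, read in enumerate(reads):
        for start in range(len(read) - q + 1):
            window = read[start:start + q]
            mismatches = sum(1 for a, b in zip(window, query) if a != b)
            if mismatches <= max_mismatches:
                hits.append((read_id, start, mismatches))
    return hits

reads = ["ACGTACGTTG", "TTGACGAACC"]
hits = find_near_matches(reads, "ACGA")
# → [(0, 0, 1), (0, 4, 1), (1, 3, 0)]: two 1-mismatch hits, one exact hit
```

Allowing a mismatch budget is what makes such searches useful for spotting homopolymer-length errors and polymorphisms in pyrosequencing reads.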
van den Broek, Janneke M; Brunsveld-Reinders, Anja H; Zedlitz, Aglaia M E E; Girbes, Armand R J; de Jonge, Evert; Arbous, M Sesmu
2015-08-01
To perform a systematic review of the literature to determine which questionnaires are currently available to measure family satisfaction with care on the ICU and to provide an overview of their quality by evaluating their psychometric properties. We searched PubMed, Embase, The Cochrane Library, Web of Science, PsycINFO, and CINAHL from inception to October 30, 2013. Experimental and observational research articles reporting on questionnaires on family satisfaction and/or needs in the ICU were included. Two reviewers determined eligibility. Design, application mode, language, and the number of studies of the tools were registered. With this information, the tools were globally categorized according to validity and reliability: level I (well-established quality), II (approaching well-established quality), III (promising quality), or IV (unconfirmed quality). The quality of the highest level (I) tools was assessed by further examination of the psychometric properties and sample size of the studies. The search detected 3,655 references, from which 135 articles were included. We found 27 different tools that assessed overall or circumscribed aspects of family satisfaction with ICU care. Only four questionnaires were categorized as level I: the Critical Care Family Needs Inventory, the Society of Critical Care Medicine Family Needs Assessment, the Critical Care Family Satisfaction Survey, and the Family Satisfaction in the Intensive Care Unit. Studies on these questionnaires were of good sample size (n ≥ 100) and showed adequate data on face/content validity and internal consistency. Studies on the Critical Care Family Needs Inventory, the Family Satisfaction in the Intensive Care Unit also contained sufficient data on inter-rater/test-retest reliability, responsiveness, and feasibility. In general, data on measures of central tendency and sensitivity to change were scarce. 
Of all the questionnaires found, the Critical Care Family Needs Inventory and the Family Satisfaction in the Intensive Care Unit were the most reliable and valid in relation to their psychometric properties. However, a universal "best questionnaire" is indefinable because it depends on the specific goal, context, and population used in the inquiry.
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of a standard single-objective deterministic Genetic Algorithm.
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe
2010-03-31
GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure for Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any particular application. You cannot find more general mathematical software.
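The simplest of the 1-D globalization procedures referenced above is Armijo backtracking: shrink a trial step until it yields sufficient decrease along the search direction. This is a generic sketch of that textbook algorithm, not GlobiPack's actual C++ implementation.

```python
def backtracking_line_search(f, grad_f, x, direction, alpha0=1.0,
                             c1=1e-4, rho=0.5, max_iter=50):
    """Armijo backtracking: return a step length alpha along `direction`
    satisfying f(x + alpha*d) <= f(x) + c1*alpha*grad_f(x).d."""
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad_f(x), direction))  # directional derivative
    alpha = alpha0
    for _ in range(max_iter):
        trial = [xi + alpha * di for xi, di in zip(x, direction)]
        if f(trial) <= fx + c1 * alpha * slope:   # sufficient-decrease test
            return alpha
        alpha *= rho                              # shrink the step
    return alpha

# Quadratic test: steepest-descent direction from (1, 1) on f = x^2 + y^2.
f = lambda x: x[0] ** 2 + x[1] ** 2
grad = lambda x: [2 * x[0], 2 * x[1]]
alpha = backtracking_line_search(f, grad, [1.0, 1.0], [-2.0, -2.0])
```

Here the full step alpha = 1 overshoots to (-1, -1) with no decrease, so one halving to alpha = 0.5 lands exactly on the minimizer, which is why Newton-type solvers pair naturally with such a procedure.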
Global search and rescue - A new concept. [orbital digital radar system with passive reflectors
NASA Technical Reports Server (NTRS)
Sivertson, W. E., Jr.
1976-01-01
A new terrestrial search and rescue concept is defined embodying the use of simple passive radiofrequency reflectors in conjunction with a low earth-orbiting, all-weather, synthetic aperture radar to detect, identify, and position locate earth-bound users in distress. Users include ships, aircraft, small boats, explorers, hikers, etc. Airborne radar tests were conducted to evaluate the basic concept. Both X-band and L-band, dual polarization radars were operated simultaneously. Simple, relatively small, corner-reflector targets were successfully imaged and digital data processing approaches were investigated. Study of the basic concept and evaluation of results obtained from aircraft flight tests indicate an all-weather, day or night, global search and rescue system is feasible.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.
2007-12-01
The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. 
The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process tens of thousands of data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. During the upload process the owner has the option of keeping the contribution private so it can be viewed in the context of other data sets and visualized using the suite of MagIC plotting tools. Alternatively, the new data can be password protected and shared with a group of users at the contributor's discretion. Once they are published and the owner is comfortable making the upload publicly accessible, the MagIC Editing Committee reviews the contribution for adherence to the MagIC data model and conventions to ensure a high level of data integrity.
OceanVideoLab: A Tool for Exploring Underwater Video
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Wiener, C.
2016-02-01
Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. 
Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of content by other systems/tools, the integration of related environmental data from complementary data systems (e.g. temperature, bathymetry), and the expansion of infrastructure to enable broad crowdsourcing of annotations.
KML (Keyhole Markup Language) : a key tool in the education of geo-resources.
NASA Astrophysics Data System (ADS)
Veltz, Isabelle
2015-04-01
Although going into the field with pupils remains the best way to understand the geologic structure of a deposit, it is very difficult to bring them into a mining extraction site and impossible to explore whole regions in search of these resources. For those reasons KML (with the Google Earth interface) is a very complete tool for teaching geosciences. Simple and intuitive, its handling is quickly mastered by pupils, and it also allows teachers to validate skills for IT certificates. It allows the use of KML files drawn from online repositories, from the teacher's own productions, or from pupils' work. These tools offer a global 3D view as well as geolocation-based access to any type of geological data. The resource on which I built this KML, methane hydrate, is taught across the three years of the French high school curriculum. This unconventional hydrocarbon sits on the vague border between mineral and organic matter (as phosphate deposits do). For over ten years it has been the subject of a race to exploit gas hydrate fields in an attempt to meet world demand. Methane hydrate fields are very useful and interesting for studying the three major themes of geological resources: exploration, exploitation, and risks, especially for environments and populations. The KML I propose lets pupils put themselves in the shoes of a geologist searching for deposits or of the technician who will extract the resource. It also lets them evaluate the risks connected to the effect of tectonic activity or climate change on the natural or catastrophic release of methane, and its role in increasing the greenhouse effect. This KML, together with plenty of pedagogic activities, is directly downloadable for teachers at http://eduterre.ens-lyon.fr/eduterre-usages/actualites/methane/.
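A teacher generating such files programmatically could start from a minimal Placemark; the helper below emits one (the site name and coordinates are invented for illustration, and KML expects lon,lat order in WGS84):

```python
def placemark_kml(name, lon, lat, description=""):
    """Minimal KML 2.2 document containing a single Placemark: enough to
    drop one labeled point into Google Earth.  Coordinates are lon,lat,alt."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

# Hypothetical classroom example, not a real hydrate field.
doc = placemark_kml("Methane hydrate site (example)", 135.0, 33.5,
                    "Illustrative placemark for a classroom exercise")
```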
Agroforestry landscapes and global change: landscape ecology tools for management and conservation
Guillermo Martinez Pastur; Emilie Andrieu; Louis R. Iverson; Pablo Luis Peri
2012-01-01
Forest ecosystems are impacted by multiple uses under the influence of global drivers, and landscape ecology tools may substantially facilitate the management and conservation of agroforestry ecosystems. The use of landscape ecology tools is described in the eight papers of the present special issue, including changes in forested landscapes due to...
ERIC Educational Resources Information Center
Glover, Alison; Peters, Carl; Haslett, Simon K.
2011-01-01
Purpose: The purpose of this paper is to test the validity of the curriculum auditing tool Sustainability Tool for Auditing University Curricula in Higher Education (STAUNCH[C]), which was designed to audit the education for sustainability and global citizenship content of higher education curricula. The Welsh Assembly Government aspires to…
Simulation and analysis of differential GPS
NASA Astrophysics Data System (ADS)
Denaro, R. P.
NASA is conducting a research program to evaluate differential Global Positioning System (GPS) concepts for civil helicopter navigation. It is pointed out that the civil helicopter community will probably be an early user of GPS because of the unique mission operations in areas where precise navigation aids are not available. However, many of these applications involve accuracy requirements which cannot be satisfied by conventional GPS. Such applications include remote area search and rescue, offshore oil platform approach, remote area precision landing, and other precise navigation operations. Differential GPS provides a promising approach for meeting very demanding accuracy requirements. The considered procedure eliminates some of the common bias errors experienced by conventional GPS. This is done by making use of a second GPS receiver. A simulation process is developed as a tool for analyzing various scenarios of GPS-referenced civil aircraft navigation.
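The common-bias cancellation at the heart of differential GPS can be illustrated with toy pseudoranges (hypothetical numbers; real processing also handles receiver clocks, geometry, and time alignment):

```python
def pseudorange_corrections(measured_ref, true_ranges_ref):
    """Per-satellite corrections computed at a reference receiver whose
    position is surveyed: correction = true geometric range - measurement."""
    return [true - meas for meas, true in zip(measured_ref, true_ranges_ref)]

def apply_corrections(measured_rover, corrections):
    """The rover applies the same corrections; common-mode errors cancel."""
    return [meas + c for meas, c in zip(measured_rover, corrections)]

bias = 8.0  # common-mode error (satellite clock, ionosphere, ...) in metres
true_ref   = [20000100.0, 21000250.0, 22000400.0]   # surveyed ranges, 3 satellites
true_rover = [20000150.0, 21000230.0, 22000380.0]
meas_ref   = [r + bias for r in true_ref]
meas_rover = [r + bias for r in true_rover]
corr  = pseudorange_corrections(meas_ref, true_ref)
fixed = apply_corrections(meas_rover, corr)
```

After correction the rover's pseudoranges equal its true geometric ranges, which is the bias-elimination effect the abstract describes.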
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hong; Liu, Jian; Xiao, Jianyuan
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g. 10^9, degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
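The long-term accuracy claim rests on the symplectic structure of the time advance. As a much-reduced illustration (a leapfrog push for a single harmonic oscillator, not the canonical symplectic PIC scheme itself), a symplectic integrator keeps the energy error bounded over many periods:

```python
def leapfrog(x, v, omega, dt, steps):
    """Kick-drift-kick leapfrog for x'' = -omega^2 x, a symplectic
    integrator: the energy error oscillates but does not drift, which is
    the long-term-fidelity property symplectic PIC methods exploit."""
    for _ in range(steps):
        v += -0.5 * dt * omega**2 * x   # half kick
        x += dt * v                      # drift
        v += -0.5 * dt * omega**2 * x   # half kick
    return x, v

x0, v0, omega, dt = 1.0, 0.0, 1.0, 0.05
x, v = leapfrog(x0, v0, omega, dt, 10000)   # ~80 oscillation periods
e0 = 0.5 * v0**2 + 0.5 * (omega * x0)**2
e  = 0.5 * v**2 + 0.5 * (omega * x)**2
```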
What if Finding Data was as Easy as Subscribing to the News?
NASA Astrophysics Data System (ADS)
Duerr, R. E.
2011-12-01
Data are the "common wealth of humanity," the fuel that drives the sciences; but much of the data that exist are inaccessible, buried in one of numerous stove-piped data systems, or entirely hidden unless you have direct knowledge of and contact with the investigator that acquired them. Much of the "wealth" is squandered and overall scientific progress inhibited, a situation that is becoming increasingly untenable with the openness required by data-driven science. What are needed are simple interoperability protocols and advertising mechanisms that allow data from disparate data systems to be easily discovered, explored, and accessed. The tools must be simple enough that individual investigators can use them without IT support. The tools cannot rely on centralized repositories or registries but must enable the development of ad-hoc or special purpose aggregations of data and services tailored to individual community needs. In addition, the protocols must scale to support the discovery of and access to the holdings of the global, interdisciplinary community, be they individual investigators or major data centers. NSIDC, in conjunction with other members of the Federation of Earth Science Information Partners and the Polar Information Commons, are working on just such a suite of tools and protocols. In this talk, I discuss data and service casting, aggregation, data badging, and OpenSearch - a suite of tools and protocols which, when used in conjunction with each other, have the potential of completely changing the way that data and services worldwide are discovered and used.
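OpenSearch discovery rests on URL templates that clients expand with query parameters. A simplified expansion, under the assumption that optional parameters (names ending in `?`) default to empty strings, might look like this (the endpoint URL is invented):

```python
import re

def expand_opensearch(template, **params):
    """Expand an OpenSearch-style URL template: {searchTerms} is required,
    names ending in '?' (e.g. {startPage?}) are optional.  A simplified
    reading of the OpenSearch 1.1 template syntax, for illustration only."""
    def repl(match):
        name = match.group(1).rstrip("?")   # '{startPage?}' -> 'startPage'
        return str(params.get(name, ""))    # missing optionals become ""
    return re.sub(r"\{([^}]+)\}", repl, template)

url = expand_opensearch(
    "https://example.org/search?q={searchTerms}&page={startPage?}",
    searchTerms="sea+ice")
```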
Efficient RNA structure comparison algorithms.
Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason
2017-12-01
The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features that allow an RNA structure database to be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures, with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
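Substructure lookup by binary search on a suffix array can be sketched on plain strings (a generic illustration; the paper's relative-addressing encoding of structures is not reproduced here):

```python
def build_suffix_array(s):
    # All suffix start positions, sorted by suffix text.
    # O(n^2 log n) construction is fine for a sketch.
    return sorted(range(len(s)), key=lambda i: s[i:])

def search(s, sa, pattern):
    """All occurrences of `pattern` in `s` via two binary searches on the
    suffix array: the matching suffixes form one contiguous block."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                               # leftmost suffix with prefix >= pattern
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                               # past the last suffix with that prefix
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "banana"
sa = build_suffix_array(text)
hits = search(text, sa, "ana")
```

The same mechanism supports substructure search once each RNA structure is serialized to a string over a fixed alphabet.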
Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia
2016-01-01
Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs. PMID:27929431
E-MSD: an integrated data resource for bioinformatics.
Golovin, A; Oldfield, T J; Tate, J G; Velankar, S; Barton, G J; Boutselakis, H; Dimitropoulos, D; Fillon, J; Hussain, A; Ionides, J M C; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Pajon, A; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, G J; Tagari, M; Tromm, S; Vranken, W; Henrick, K
2004-01-01
The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the Protein Data Bank (PDB) and to work towards the integration of various bioinformatics data resources. We have implemented a simple form-based interface that allows users to query the MSD directly. The MSD 'atlas pages' show all of the information in the MSD for a particular PDB entry. The group has designed new search interfaces aimed at specific areas of interest, such as the environment of ligands and the secondary structures of proteins. We have also implemented a novel search interface that begins to integrate separate MSD search services in a single graphical tool. We have worked closely with collaborators to build a new visualization tool that can present both structure and sequence data in a unified interface, and this data viewer is now used throughout the MSD services for the visualization and presentation of search results. Examples showcasing the functionality and power of these tools are available from tutorial webpages (http://www.ebi.ac.uk/msd-srv/docs/roadshow_tutorial/).
RAPSearch: a fast protein similarity search tool for short reads
2011-01-01
Background: Next Generation Sequencing (NGS) is producing enormous corpuses of short DNA reads, affecting emerging fields like metagenomics. Protein similarity search--a key step to achieve annotation of protein-coding genes in these short reads, and identification of their biological functions--faces daunting challenges because of the very sizes of the short read datasets. Results: We developed a fast protein similarity search tool RAPSearch that utilizes a reduced amino acid alphabet and suffix array to detect seeds of flexible length. For short reads (translated in 6 frames) we tested, RAPSearch achieved ~20-90 times speedup as compared to BLASTX. RAPSearch missed only a small fraction (~1.3-3.2%) of BLASTX similarity hits, but it also discovered additional homologous proteins (~0.3-2.1%) that BLASTX missed. By contrast, BLAT, a tool that is even slightly faster than RAPSearch, had significant loss of sensitivity as compared to RAPSearch and BLAST. Conclusions: RAPSearch is implemented as open-source software and is accessible at http://omics.informatics.indiana.edu/mg/RAPSearch. It enables faster protein similarity search. The application of RAPSearch in metagenomics has also been demonstrated. PMID:21575167
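The reduced-alphabet seeding idea can be sketched as follows (the residue groupings below are hypothetical groupings of physico-chemically similar amino acids, not RAPSearch's actual alphabet):

```python
# Hypothetical reduced alphabet: each group of similar residues collapses
# to one symbol, so seeds still match across conservative substitutions.
GROUPS = ["AGST", "C", "DENQ", "FWY", "HKR", "ILMV", "P"]
REDUCE = {aa: chr(ord("a") + i) for i, grp in enumerate(GROUPS) for aa in grp}

def reduce_seq(seq):
    """Map a protein sequence onto the reduced alphabet."""
    return "".join(REDUCE[aa] for aa in seq)

def shared_seeds(query, subject, k=4):
    """Length-k seeds shared between two sequences in reduced space;
    in a real tool these would anchor banded alignments."""
    rq, rs = reduce_seq(query), reduce_seq(subject)
    q_seeds = {rq[i:i + k] for i in range(len(rq) - k + 1)}
    s_seeds = {rs[i:i + k] for i in range(len(rs) - k + 1)}
    return q_seeds & s_seeds
```

Note how "LIVADE" and "VLMADE" differ in three residues yet reduce to the same string, so every seed matches.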
Creating a Classroom Kaleidoscope with the World Wide Web.
ERIC Educational Resources Information Center
Quinlan, Laurie A.
1997-01-01
Discusses the elements of classroom Web presentations: planning; construction, including design tips; classroom use; and assessment. Lists 14 World Wide Web resources for K-12 teachers; Internet search tools (directories, search engines and meta-search engines); a Web glossary; and an example of HTML for a simple Web page. (PEN)
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
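The modulation-plus-normalization step can be sketched with toy numbers (a schematic of the idea with invented responses and gains, not the published model):

```python
def priority_map(features, target_gains, sigma=1.0):
    """At each location, bottom-up channel responses are weighted by
    target-specific gains, then divisively normalized by the summed local
    activity so one highly salient channel cannot monopolize attention."""
    out = []
    for loc in features:
        modulated = sum(r * g for r, g in zip(loc, target_gains))
        pool = sigma + sum(loc)          # divisive inhibition pool
        out.append(modulated / pool)
    return out

# Location 0 matches the target's feature channel; location 1 is a very
# salient distractor in the wrong channel.
features = [[1.0, 0.2], [0.2, 5.0]]
gains = [1.0, 0.3]
raw  = [sum(r * g for r, g in zip(loc, gains)) for loc in features]
pmap = priority_map(features, gains)
```

Without normalization the raw modulated response favors the salient distractor; with divisive normalization the target-matching location wins, which is the failure mode the normalization step is said to prevent.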
ERIC Educational Resources Information Center
Williams, Carrick C.; Pollatsek, Alexander; Cave, Kyle R.; Stroud, Michael J.
2009-01-01
In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue "T") was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search though…
Smith, R F; Wiese, B A; Wojzynski, M K; Davison, D B; Worley, K C
1996-05-01
The BCM Search Launcher is an integrated set of World Wide Web (WWW) pages that organize molecular biology-related search and analysis services available on the WWW by function, and provide a single point of entry for related searches. The Protein Sequence Search Page, for example, provides a single sequence entry form for submitting sequences to WWW servers that offer remote access to a variety of different protein sequence search tools, including BLAST, FASTA, Smith-Waterman, BEAUTY, PROSITE, and BLOCKS searches. Other Launch pages provide access to (1) nucleic acid sequence searches, (2) multiple and pair-wise sequence alignments, (3) gene feature searches, (4) protein secondary structure prediction, and (5) miscellaneous sequence utilities (e.g., six-frame translation). The BCM Search Launcher also provides a mechanism to extend the utility of other WWW services by adding supplementary hypertext links to results returned by remote servers. For example, links to the NCBI's Entrez data base and to the Sequence Retrieval System (SRS) are added to search results returned by the NCBI's WWW BLAST server. These links provide easy access to auxiliary information, such as Medline abstracts, that can be extremely helpful when analyzing BLAST data base hits. For new or infrequent users of sequence data base search tools, we have preset the default search parameters to provide the most informative first-pass sequence analysis possible. We have also developed a batch client interface for Unix and Macintosh computers that allows multiple input sequences to be searched automatically as a background task, with the results returned as individual HTML documents directly to the user's system. The BCM Search Launcher and batch client are available on the WWW at URL http://gc.bcm.tmc.edu:8088/search-launcher.html.
Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.
2016-01-01
Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contributes to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), that enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the following iterations. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website. PMID:26419769
On the Local Convergence of Pattern Search
NASA Technical Reports Server (NTRS)
Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
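A minimal compass (pattern) search shows how the step-length control parameter doubles as the stopping criterion, as discussed above (a generic sketch on a toy quadratic, not the full algorithm class analyzed in the paper):

```python
def compass_search(f, x, step=1.0, tol=1e-6, shrink=0.5, max_iter=10000):
    """Coordinate (compass) pattern search: poll +/- each coordinate
    direction; when no poll point improves, shrink the step.  The final
    step length serves as the asymptotic measure of stationarity, so
    `step < tol` is the traditional stopping criterion."""
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                if f(trial) < f(x):
                    x, improved = trial, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink     # unsuccessful poll: halve the step
    return x, step

xmin, final_step = compass_search(lambda p: (p[0] - 2) ** 2 + p[1] ** 2,
                                  [0.0, 0.0])
```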
Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David
2016-01-01
Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.
Finding the global minimum: a fuzzy end elimination implementation
NASA Technical Reports Server (NTRS)
Keller, D. A.; Shibata, M.; Marcus, E.; Ornstein, R. L.; Rein, R.
1995-01-01
The 'fuzzy end elimination theorem' (FEE) is a mathematically proven theorem that identifies rotameric states in proteins which are incompatible with the global minimum energy conformation. While implementing the FEE we noticed two different aspects that directly affected the final results at convergence. First, the identification of a single dead-ending rotameric state can trigger a 'domino effect' that initiates the identification of additional rotameric states which become dead-ending. A recursive check for dead-ending rotameric states is therefore necessary every time a dead-ending rotameric state is identified. It is shown that, if the recursive check is omitted, it is possible to miss the identification of some dead-ending rotameric states causing a premature termination of the elimination process. Second, we examined the effects of removing dead-ending rotameric states from further considerations at different moments of time. Two different methods of rotameric state removal were examined for an order dependence. In one case, each rotamer found to be incompatible with the global minimum energy conformation was removed immediately following its identification. In the other, dead-ending rotamers were marked for deletion but retained during the search, so that they influenced the evaluation of other rotameric states. When the search was completed, all marked rotamers were removed simultaneously. In addition, to expand further the usefulness of the FEE, a novel method is presented that allows for further reduction in the remaining set of conformations at the FEE convergence. In this method, called a tree-based search, each dead-ending pair of rotamers which does not lead to the direct removal of either rotameric state is used to reduce significantly the number of remaining conformations. In the future this method can also be expanded to triplet and quadruplet sets of rotameric states. 
We tested our implementation of the FEE by exhaustively searching ten protein segments and found that the FEE identified the global minimum every time. For each segment, the global minimum was exhaustively searched in two different environments: (i) the segments were extracted from the protein and exhaustively searched in the absence of the surrounding residues; (ii) the segments were exhaustively searched in the presence of the remaining residues fixed at crystal structure conformations. We also evaluated the performance of the method for accurately predicting side chain conformations. We examined the influence of factors such as type and accuracy of backbone template used, and the restrictions imposed by the choice of potential function, parameterization and rotamer database. Conclusions are drawn on these results and future prospects are given.
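The elimination criterion behind the FEE, together with the recursive re-check ("domino effect") the authors emphasize, can be sketched as a simple dead-end elimination loop; the energy tables below are a toy example, not protein data:

```python
def dee(self_E, pair_E):
    """Recursive dead-end elimination.

    self_E[(i, r)]       -- self energy of rotamer r at position i
    pair_E[(i, r, j, s)] -- pairwise energy (stored in both index orders)

    Rotamer r at position i is dead-ending if some competitor t at the
    same position beats it even in r's best case:
        E(i_r) + sum_j min_s E(i_r, j_s) > E(i_t) + sum_j max_s E(i_t, j_s)
    """
    positions = sorted({i for i, _ in self_E})
    alive = {i: {r for j, r in self_E if j == i} for i in positions}

    def bound(i, r, worst):
        # Lower (worst=False) or upper (worst=True) bound on the energy
        # contribution of rotamer r at i, given the surviving rotamers.
        total = self_E[(i, r)]
        for j in positions:
            if j != i:
                vals = [pair_E[(i, r, j, s)] for s in alive[j]]
                total += max(vals) if worst else min(vals)
        return total

    changed = True
    while changed:  # re-check after every pass: the "domino effect"
        changed = False
        for i in positions:
            for r in list(alive[i]):
                if any(t != r and bound(i, r, False) > bound(i, t, True)
                       for t in alive[i]):
                    alive[i].discard(r)
                    changed = True
    return alive

# Toy system: two positions, two rotamers each, no pairwise interactions.
self_E = {(0, 0): 10.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 5.0}
pair_E = {(i, r, j, s): 0.0
          for i in (0, 1) for j in (0, 1) if i != j
          for r in (0, 1) for s in (0, 1)}
survivors = dee(self_E, pair_E)
```

Skipping the outer while-loop is exactly the omission the abstract warns about: an elimination can newly qualify other rotamers for elimination, so a single pass may terminate prematurely.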
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
Tachyon search speeds up retrieval of similar sequences by several orders of magnitude.
Tan, Joshua; Kuchibhatla, Durga; Sirota, Fernanda L; Sherman, Westley A; Gattermayer, Tobias; Kwoh, Chia Yee; Eisenhaber, Frank; Schneider, Georg; Maurer-Stroh, Sebastian
2012-06-15
Current sequence search tools become increasingly slow as databases of protein sequences continue to grow exponentially. Tachyon, a new algorithm that identifies closely related protein sequences ~200 times faster than standard BLAST, circumvents this limitation with a reduced database and an oligopeptide matching heuristic. The tool is publicly accessible as a webserver at http://tachyon.bii.a-star.edu.sg and can also be accessed programmatically through SOAP.
Simrank: Rapid and sensitive general-purpose k-mer search tool
2011-01-01
Background Terabyte-scale collections of string-encoded data are expected from consortia efforts such as the Human Microbiome Project http://nihroadmap.nih.gov/hmp. Intra- and inter-project data similarity searches are enabled by rapid k-mer matching strategies. Software applications for sequence database partitioning, guide tree estimation, molecular classification and alignment acceleration have benefited from embedded k-mer searches as sub-routines. However, a rapid, general-purpose, open-source, flexible, stand-alone k-mer tool has not been available. Results Here we present a stand-alone utility, Simrank, which allows users to rapidly identify the database strings most similar to query strings. Performance testing of Simrank and related tools against DNA, RNA, protein and human-language datasets found Simrank 10X to 928X faster, depending on the dataset. Conclusions Simrank provides molecular ecologists with a high-throughput, open source choice for comparing large sequence sets to find similarity. PMID:21524302
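The k-mer matching idea at the heart of such tools can be sketched as follows; the choice of k and the similarity measure (fraction of shared query k-mers) are illustrative assumptions, not Simrank's exact algorithm:

```python
def kmers(seq, k=7):
    """All unique overlapping k-mers of a string."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(query, target, k=7):
    """Fraction of the query's unique k-mers that also occur in the target."""
    q = kmers(query, k)
    return len(q & kmers(target, k)) / len(q) if q else 0.0

def rank_database(query, db, k=7):
    """Rank database entries by decreasing k-mer similarity to the query."""
    return sorted(db, key=lambda name: kmer_similarity(query, db[name], k),
                  reverse=True)

# Hypothetical two-sequence database.
db = {"seqA": "ACGTACGTACGTACGT", "seqB": "GGGGGGGGCCCCCCCC"}
ranked = rank_database("ACGTACGTACGT", db)
```

Because set intersection replaces alignment, each comparison is cheap; production tools additionally pre-index the database so that the per-query cost does not grow linearly with database size.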
The epidemiological modelling of dysthymia: application for the Global Burden of Disease Study 2010.
Charlson, Fiona J; Ferrari, Alize J; Flaxman, Abraham D; Whiteford, Harvey A
2013-10-01
In order to capture the differences in burden between the subtypes of depression, the Global Burden of Disease 2010 Study for the first time estimated the burden of dysthymia and major depressive disorder separately from the previously used umbrella term 'unipolar depression'. A global summary of epidemiological parameters is a necessary input to burden of disease calculations for 21 world regions, for males and females, and for the years 1990, 2005 and 2010. This paper reports findings from a systematic review of global epidemiological data and the subsequent development of an internally consistent epidemiological model of dysthymia. A systematic search was conducted to identify data sources for the prevalence, incidence, remission and excess mortality of dysthymia using Medline, PsycINFO and EMBASE electronic databases and grey literature. DisMod-MR, a Bayesian meta-regression tool, was used to check the epidemiological parameters for internal consistency and to predict estimates for world regions with no or few data. The systematic review identified 38 studies meeting inclusion criteria, which provided 147 data points for 30 countries in 13 of 21 world regions. Prevalence increases from the early ages, peaking at around 50 years. Females have a higher prevalence of dysthymia than males. Global pooled prevalence remained constant across time points at 1.55% (95% CI 1.50-1.60). There was very little regional variation in prevalence estimates. There were eight GBD world regions for which we found no data and for which DisMod-MR had to impute estimates. The addition of internally consistent epidemiological estimates by world region, age, sex and year for dysthymia contributed to a more comprehensive estimate of mental health burden in GBD 2010. © 2013 Elsevier B.V. All rights reserved.
Synthesis: Deriving a Core Set of Recommendations to Optimize Diabetes Care on a Global Scale.
Mechanick, Jeffrey I; Leroith, Derek
2015-01-01
Diabetes afflicts 382 million people worldwide, with increasing prevalence rates and adverse effects on health, well-being, and society in general. There are many drivers for the complex presentation of diabetes, including environmental and genetic/epigenetic factors. The aim was to synthesize a core set of recommendations from information from 14 countries that can be used to optimize diabetes care on a global scale. Information from 14 papers in this special issue of Annals of Global Health was reviewed, analyzed, and sorted to synthesize recommendations. PubMed was searched for relevant studies on diabetes and global health. Key findings are as follows: (1) Population-based transitions distinguish region-specific diabetes care; (2) biological drivers for diabetes differ among various populations and need to be clarified scientifically; (3) principal resource availability determines quality-of-care metrics; and (4) governmental involvement, independent of economic barriers, improves the contextualization of diabetes care. Core recommendations are as follows: (1) Each nation should assess region-specific epidemiology, the scientific evidence base, and population-based transitions to establish risk-stratified guidelines for diagnosis and therapeutic interventions; (2) each nation should establish a public health imperative to provide tools and funding to successfully implement these guidelines; and (3) each nation should commit to education and research to optimize recommendations for a durable effect. Systematic acquisition of information about diabetes care can be analyzed, extrapolated, and then used to provide a core set of actionable recommendations that may be further studied and implemented to improve diabetes care on a global scale. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Explorative search of distributed bio-data to answer complex biomedical questions
2014-01-01
Background The huge amount of biomedical-molecular data increasingly being produced provides scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract the data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment that eases search, exploration and ranking-aware combination of the heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them, and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results.
Thus, it provides extremely useful automated support for exploratory integrated bio search, which is fundamental for Life Science data-driven knowledge discovery. PMID:24564278
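The ranking-aware combination of results from heterogeneous services can be illustrated with a toy rank join; the rank-sum scoring rule and the service outputs below are assumptions for illustration, not Bio-SeCo's actual strategy:

```python
def ranked_join(results_a, results_b, combine=lambda ra, rb: ra + rb):
    """Join two ranked result lists that share item keys.

    Each input maps item -> rank (1 = best). Items surviving the join are
    re-ranked by a combined score; rank-sum is the default scoring rule.
    """
    common = results_a.keys() & results_b.keys()
    return sorted(common,
                  key=lambda item: combine(results_a[item], results_b[item]))

# Hypothetical ranked outputs of two search services.
service_a = {"gene1": 1, "gene2": 2, "gene3": 3}
service_b = {"gene2": 1, "gene3": 2, "gene4": 1}
combined = ranked_join(service_a, service_b)
```

The key design point is that ranks from each service survive the join, so the merged list reflects both sources' orderings rather than either one alone.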
Searching bioremediation patents through Cooperative Patent Classification (CPC).
Prasad, Rajendra
2016-03-01
Patent classification systems have traditionally evolved independently at each patent jurisdiction, allowing examiners to classify the patents they handle and to search previous patents while dealing with new patent applications. As the patent databases maintained by these offices went online, for free public access and for global prior-art searches by examiners, the need arose for a common platform and a uniform structure of patent databases. The diversity of classification systems, however, posed problems for integrating and searching relevant patents across patent jurisdictions. To address this problem of comparability of data from different sources, WIPO developed the International Patent Classification (IPC) system, which most countries readily adopted, coding their patents with IPC codes alongside their own. The Cooperative Patent Classification (CPC) is the latest patent classification system, based on the IPC/European Classification (ECLA) system and developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO); it is likely to become a global standard. This paper discusses this new classification system with reference to patents on bioremediation.
Serving Fisheries and Ocean Metadata to Communities Around the World
NASA Technical Reports Server (NTRS)
Meaux, Melanie
2006-01-01
NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 services descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, and spatial and temporal coverage. The directory also offers data holders a means to post and search their data through customized portals, i.e. online customized subset metadata directories. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1994. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as a coordinating node of the International Directory Network (IDN), has been active at the international, regional and national level for many years through its involvement with the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and partnerships (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP), sharing experience and knowledge related to metadata and/or data management and interoperability.
Serving Fisheries and Ocean Metadata to Communities Around the World
NASA Technical Reports Server (NTRS)
Meaux, Melanie F.
2007-01-01
NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 services descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches, or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage, and data resolution for selected datasets. The directory also offers data holders a means to advertise and search their data through customized portals, which are subset views of the directory. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1988. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as the International Directory Network (IDN), has been providing its services and sharing its experience and knowledge of metadata at the international, national, regional, and local level for many years. Active partners include the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and organizations (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP).
The Antarctic Master Directory -- the Electronic Card Catalog of Antarctic Data
NASA Astrophysics Data System (ADS)
Scharfen, G.; Bauer, R.
2003-12-01
The Antarctic Master Directory (AMD) is a Web-based, searchable record of thousands of Antarctic data descriptions. These data descriptions contain information about what data were collected, where they were collected, when they were collected, who the scientists are, who the point of contact is, how to get the data, and information about the format of the data and what documentation and bibliographic information exists. With this basic descriptive information about content and access for thousands of Antarctic scientific data sets, the AMD is a resource for scientists to advertise the data they have collected and to search for data they need. The AMD has been created by more than twenty nations which conduct research in the Antarctic under the auspices of the Antarctic Treaty. It is a part of the International Directory Network/Global Change Master Directory (IDN/GCMD). Using the AMD is easy. Users can search on subject matter key words, data types, geographic place-names, temporal or spatial ranges, or conduct free-text searches. To search the AMD go to: http://gcmd.nasa.gov/Data/portals/amd/. Contributing your own data descriptions for Antarctic data that you have collected is also easy. Scientists can start by submitting a short data description first (as a placeholder in the AMD, and to satisfy National Science Foundation (NSF) reporting requirements), and then add to, modify or update their record whenever it is appropriate. An easy to use on-line tool and a simple tutorial are available at: http://nsidc.org/usadcc. With NSF Office of Polar Programs (OPP) funding, the National Snow and Ice Data Center (NSIDC) operates the U.S. Antarctic Data Coordination Center (USADCC), partly to assist scientists in using and contributing to the AMD. The USADCC website is at http://nsidc.org/usadcc.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum, increases the fidelity of the approximation model for the next iteration, or both. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases.
Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
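A database-driven surrogate loop of the kind the abstract describes can be sketched in one dimension; real surrogate methods use kriging, radial basis functions, or neural networks rather than the exact quadratic interpolant assumed here, and the test objective is invented for illustration:

```python
def quad_through(p1, p2, p3):
    """Quadratic a*x^2 + b*x + c interpolating three (x, y) samples."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = y1 - a * x1**2 - b * x1
    return a, b, c

def surrogate_minimize(f, xs0, iterations=5):
    """Fit a surrogate to the three best database points, jump to its
    minimizer, evaluate the true (expensive) objective there, and grow
    the database -- stopping if the model is non-convex or stalls."""
    db = [(x, f(x)) for x in xs0]
    for _ in range(iterations):
        db.sort(key=lambda p: p[1])
        a, b, _c = quad_through(db[0], db[1], db[2])
        if a <= 0:
            break  # no interior minimum; real codes fall back to a trust region
        x_new = -b / (2 * a)  # minimizer of the quadratic surrogate
        if any(abs(x_new - x) < 1e-9 for x, _ in db):
            break  # surrogate minimizer already sampled: converged
        db.append((x_new, f(x_new)))
    db.sort(key=lambda p: p[1])
    return db[0]

best_x, best_f = surrogate_minimize(lambda x: (x - 2.0)**2 + 1.0, [0.0, 1.0, 5.0])
```

The expensive objective is called only once per iteration; everything else operates on the cheap database model, which is the economy the abstract attributes to surrogate and neural network approaches.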
Smoking cessation support for pregnant women: role of mobile technology
Heminger, Christina L; Schindler-Ruwisch, Jennifer M; Abroms, Lorien C
2016-01-01
Background Smoking during pregnancy has deleterious health effects for the fetus and mother. Given the high risks associated with smoking in pregnancy, smoking cessation programs that are designed specifically for pregnant smokers are needed. This paper summarizes the current landscape of mHealth cessation programs aimed at pregnant smokers and, where available, reviews the evidence to support their use. Methods A search strategy was conducted in June–August 2015 to identify mHealth programs with at least one component or activity explicitly directed at smoking cessation assistance for pregnant women. The search for text messaging programs and applications included keyword searches within public health and medical databases of peer-reviewed literature, the Google Play/iTunes stores, and gray literature via Google. Results Five unique short message service programs and two mobile applications were identified and reviewed. Little evidence was identified to support their use. Common tools and features included the ability to set a quit date, track smoking status, get help during cravings, be referred to a quitline, and receive content tailored to the individual participant. The theoretical approaches varied, and approximately half of the programs included pregnancy-related content in addition to cessation content. With one exception, the mHealth programs identified were found to have low enrollment. Conclusion Globally, there are a handful of applications and text-based mHealth programs available for pregnant smokers. Future studies are needed that examine the efficacy of such programs, as well as strategies to best promote enrollment. PMID:27110146
PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.
Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor
2007-09-01
PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and higher number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.
Indicators and Measurement Tools for Health Systems Integration: A Knowledge Synthesis.
Suter, Esther; Oelke, Nelly D; da Silva Lima, Maria Alice Dias; Stiphout, Michelle; Janke, Robert; Witt, Regina Rigatto; Van Vliet-Brown, Cheryl; Schill, Kaela; Rostami, Mahnoush; Hepp, Shelanne; Birney, Arden; Al-Roubaiai, Fatima; Marques, Giselda Quintana
2017-11-13
Despite far-reaching support for integrated care, conceptualizing and measuring integrated care remains challenging. This knowledge synthesis aimed to identify indicator domains and tools to measure progress towards integrated care. We used an established framework and a Delphi survey with integration experts to identify relevant measurement domains. For each domain, we searched and reviewed the literature for relevant tools. From 7,133 abstracts, we retrieved 114 unique tools. We found many quality tools to measure care coordination, patient engagement and team effectiveness/performance. In contrast, there were few tools in the domains of performance measurement and information systems, alignment of organizational goals and resource allocation. The search yielded 12 tools that measure overall integration or three or more indicator domains. Our findings highlight a continued gap in tools to measure the foundational components that support integrated care. In the absence of such targeted tools, "overall integration" tools may be useful for a broad assessment of the overall state of a system. Continued progress towards integrated care depends on our ability to evaluate the success of strategies across different levels and contexts. This study has identified 114 tools that measure integrated care across 16 domains, supporting efforts towards a unified measurement framework.
Fast protein tertiary structure retrieval based on global surface shape similarity.
Sael, Lee; Li, Bin; La, David; Fang, Yi; Ramani, Karthik; Rustamov, Raif; Kihara, Daisuke
2008-09-01
Characterization and identification of similar tertiary structures of proteins provides rich information for investigating function and evolution. The importance of structure similarity searches is increasing as structure databases continue to expand, partly due to the structural genomics projects. A crucial drawback of conventional protein structure comparison methods, which compare structures by their main-chain orientation or the spatial arrangement of secondary structure, is that a database search is too slow to be done in real time. Here we introduce a global surface shape representation by three-dimensional (3D) Zernike descriptors, which represent a protein structure compactly as a series expansion of 3D functions. With this simplified representation, a search against a few thousand structures takes less than a minute. To investigate the agreement between the surface representation defined by the 3D Zernike descriptor and the conventional main-chain based representation, a benchmark was performed against a protein classification generated by the combinatorial extension algorithm. Despite the different representation, the 3D Zernike descriptor retrieved proteins of the same conformation defined by combinatorial extension in 89.6% of the cases within the top five closest structures. Real-time protein structure search by 3D Zernike descriptors will open up new possibilities for large-scale global and local protein surface shape comparison. 2008 Wiley-Liss, Inc.
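Once each structure is reduced to a fixed-length descriptor vector, a database scan is just a vector-distance ranking, which is why a search over thousands of structures finishes quickly; the descriptor values below are invented for illustration (real 3D Zernike descriptors are vectors of many moment invariants):

```python
from math import sqrt

def euclid(a, b):
    """Euclidean distance between two fixed-length descriptor vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_k(query_desc, database, k=5):
    """Return the k database entries whose descriptors are closest to
    the query descriptor -- a linear scan of cheap vector distances."""
    return sorted(database,
                  key=lambda name: euclid(query_desc, database[name]))[:k]

# Hypothetical 3-component "descriptors" keyed by PDB-style ids.
db = {"1abc": [0.10, 0.20, 0.30],
      "2xyz": [0.90, 0.80, 0.70],
      "3foo": [0.11, 0.21, 0.29]}
hits = top_k([0.10, 0.20, 0.30], db, k=2)
```

The expensive work (computing the series expansion) happens once per structure at indexing time; queries then cost only one distance computation per database entry.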