A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.
Huang, Shiping; Wu, Zhifeng; Misra, Anil
2017-12-11
Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently while simultaneously eliminating bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of local extrema, singularities, and initial-value choice. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
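The iterative localization step summarized in this abstract can be illustrated generically. Below is a minimal Gauss-Newton sketch for the range equations ||p - a_i|| = d_i (a standard textbook formulation; the paper's actual MNM, geometric intersection model, and bad-data rejection are not given in the abstract, and all names here are illustrative):

```python
import math

def gauss_newton_localize(anchors, dists, x0, iters=20):
    """Estimate a 2-D position from range measurements by Gauss-Newton.

    anchors: list of (x, y) beacon positions
    dists:   measured distances to each beacon
    x0:      initial guess (e.g. a point inside the bounded search domain)
    """
    x, y = x0
    for _ in range(iters):
        # Accumulate J^T J and J^T r for residuals r_i = ||p - a_i|| - d_i
        JtJ = [[0.0, 0.0], [0.0, 0.0]]
        Jtr = [0.0, 0.0]
        for (ax, ay), d in zip(anchors, dists):
            rng = math.hypot(x - ax, y - ay)
            jx, jy = (x - ax) / rng, (y - ay) / rng   # row of the Jacobian
            r = rng - d
            JtJ[0][0] += jx * jx; JtJ[0][1] += jx * jy
            JtJ[1][0] += jy * jx; JtJ[1][1] += jy * jy
            Jtr[0] += jx * r;     Jtr[1] += jy * r
        # Solve the 2x2 normal equations JtJ * step = Jtr by Cramer's rule
        det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
        if abs(det) < 1e-12:       # near-singular geometry: stop rather than diverge
            break
        sx = (Jtr[0] * JtJ[1][1] - Jtr[1] * JtJ[0][1]) / det
        sy = (Jtr[1] * JtJ[0][0] - Jtr[0] * JtJ[1][0]) / det
        x, y = x - sx, y - sy
    return x, y

# Three beacons with exact distances to a true position of (2, 1)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (2.0, 1.0)
dists = [math.hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
est = gauss_newton_localize(anchors, dists, x0=(5.0, 5.0))  # converges to (2, 1)
```

With noise-free ranges and well-spread beacons the iteration converges in a handful of steps; the singularity guard mirrors the kind of safeguard the abstract alludes to.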
Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba
2013-02-01
Human musculoskeletal system resources (HMSR) are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to dynamically acquire accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to remotely access useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.
2010-01-01
The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…
NASA Astrophysics Data System (ADS)
Jiang, Y.
2015-12-01
Oceanographic resource discovery is a critical step for developing ocean science applications. With the increasing number of resources available online, many Spatial Data Infrastructure (SDI) components (e.g. catalogues and portals) have been developed to help manage and discover oceanographic resources. However, efficient and accurate resource discovery is still a big challenge because of the lack of data relevancy information. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, usage metrics, and user feedback. The objective is to improve the discovery accuracy of oceanographic data and reduce the time scientists spend discovering, downloading and reformatting data for their projects. Experiments and a search example show that the proposed engine helps both scientists and general users find more accurate results, with enhanced performance and user experience through a user-friendly interface.
Color Perception in Children with Autism
ERIC Educational Resources Information Center
Franklin, Anna; Sowden, Paul; Burley, Rachel; Notman, Leslie; Alder, Elizabeth
2008-01-01
This study examined whether color perception is atypical in children with autism. In experiment 1, accuracy of color memory and search was compared for children with autism and typically developing children matched on age and non-verbal cognitive ability. Children with autism were significantly less accurate at color memory and search than…
ERIC Educational Resources Information Center
Sutton, Jennifer E.
2006-01-01
Children ages 2, 3 and 4 years participated in a novel hide-and-seek search task presented on a touchscreen monitor. On beacon trials, the target hiding place could be located using a beacon cue, but on landmark trials, searching required the use of a nearby landmark cue. In Experiment 1, 2-year-olds performed less accurately than older children…
Setting the public agenda for online health search: a white paper and action agenda.
Greenberg, Liza; D'Andrea, Guy; Lorence, Dan
2004-06-08
Searches for health information are among the most common reasons that consumers use the Internet. Both consumers and quality experts have raised concerns about the quality of information on the Web and the ability of consumers to find accurate information that meets their needs. To produce a national stakeholder-driven agenda for research, technical improvements, and education that will improve the results of consumer searches for health information on the Internet. URAC, a national accreditation organization, and Consumer WebWatch (CWW), a project of Consumers Union (a consumer advocacy organization), conducted a review of factors influencing the results of online health searches. The organizations convened two stakeholder groups of consumers, quality experts, search engine experts, researchers, health-care providers, informatics specialists, and others. Meeting participants reviewed existing information and developed recommendations for improving the results of online consumer searches for health information. Participants were not asked to vote on or endorse the recommendations. Our working definition of a quality Web site was one that contained accurate, reliable, and complete information. The Internet has greatly improved access to health information for consumers. There is great variation in how consumers seek information via the Internet, and in how successful they are in searching for health information. Further, there is variation among Web sites, both in quality and accessibility. Many Web site features affect the capability of search engines to find and index them. Research is needed to define quality elements of Web sites that could be retrieved by search engines and understand how to meet the needs of different types of searchers. Technological research should seek to develop more sophisticated approaches for tagging information, and to develop searches that "learn" from consumer behavior. 
Finally, education initiatives are needed to help consumers search more effectively and to help them critically evaluate the information they find.
Setting the Public Agenda for Online Health Search: A White Paper and Action Agenda
D'Andrea, Guy; Lorence, Dan
2004-01-01
Background Searches for health information are among the most common reasons that consumers use the Internet. Both consumers and quality experts have raised concerns about the quality of information on the Web and the ability of consumers to find accurate information that meets their needs. Objective To produce a national stakeholder-driven agenda for research, technical improvements, and education that will improve the results of consumer searches for health information on the Internet. Methods URAC, a national accreditation organization, and Consumer WebWatch (CWW), a project of Consumers Union (a consumer advocacy organization), conducted a review of factors influencing the results of online health searches. The organizations convened two stakeholder groups of consumers, quality experts, search engine experts, researchers, health-care providers, informatics specialists, and others. Meeting participants reviewed existing information and developed recommendations for improving the results of online consumer searches for health information. Participants were not asked to vote on or endorse the recommendations. Our working definition of a quality Web site was one that contained accurate, reliable, and complete information. Results The Internet has greatly improved access to health information for consumers. There is great variation in how consumers seek information via the Internet, and in how successful they are in searching for health information. Further, there is variation among Web sites, both in quality and accessibility. Many Web site features affect the capability of search engines to find and index them. Conclusions Research is needed to define quality elements of Web sites that could be retrieved by search engines and understand how to meet the needs of different types of searchers. Technological research should seek to develop more sophisticated approaches for tagging information, and to develop searches that "learn" from consumer behavior. 
Finally, education initiatives are needed to help consumers search more effectively and to help them critically evaluate the information they find. PMID:15249267
Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill
2017-01-01
Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. A continuous search for better alternatives is encouraged to enhance marker-based predictions for individual quantitative traits in molecular plant breeding. PMID:28729875
Liao, Wenta; Draper, William M
2013-02-21
The mass-to-structure or MTS Search Engine is an Access 2010 database containing theoretical molecular mass information for 19,438 compounds assembled from common sources such as the Merck Index, pesticide and pharmaceutical compilations, and chemical catalogues. This database, which contains no experimental mass spectral data, was developed as an aid to identification of compounds in atmospheric pressure ionization (API)-LC-MS. This paper describes a powerful upgrade to this database, a fully integrated utility for filtering or ranking candidates based on isotope ratios and patterns. The new MTS Search Engine is applied here to the identification of volatile and semivolatile compounds including pesticides, nitrosamines and other pollutants. Methane and isobutane chemical ionization (CI) GC-MS spectra were obtained from unit mass resolution mass spectrometers to determine MH(+) masses and isotope ratios. Isotopes were measured accurately with errors of <4% and <6%, respectively, for A + 1 and A + 2 peaks. Deconvolution of interfering isotope clusters (e.g., M(+) and [M - H](+)) was required for accurate determination of the A + 1 isotope in halogenated compounds. Integrating the isotope data greatly improved the speed and accuracy of the database identifications. The database accurately identified unknowns from isobutane CI spectra in 100% of cases where as many as 40 candidates satisfied the mass tolerance. The paper describes the development and basic operation of the new MTS Search Engine and details performance testing with over 50 model compounds.
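The isotope-ratio filter described above can be caricatured with a toy sketch (not the MTS Search Engine's actual code; the per-element A+1 percentages are standard natural-abundance approximations, and the tolerance mirrors the <4% A+1 error reported):

```python
# Approximate natural A+1 isotope contributions, percent per atom
# (dominated by 13C and 15N; chlorine contributes to A+2, not A+1)
A1_PERCENT = {"C": 1.07, "H": 0.0115, "N": 0.364, "O": 0.038, "S": 0.75}

def predicted_a1_ratio(formula_counts):
    """Approximate (A+1)/A peak ratio in percent for a molecular formula."""
    return sum(A1_PERCENT.get(el, 0.0) * n for el, n in formula_counts.items())

def filter_by_isotope(candidates, measured_a1, tol=4.0):
    """Keep candidates whose predicted A+1 ratio lies within tol percentage
    points of the measured value."""
    return [name for name, counts in candidates
            if abs(predicted_a1_ratio(counts) - measured_a1) <= tol]

candidates = [
    ("caffeine C8H10N4O2", {"C": 8, "H": 10, "N": 4, "O": 2}),  # ~10.2% A+1
    ("DDT C14H9Cl5",       {"C": 14, "H": 9, "Cl": 5}),         # ~15.1% A+1
]
# A measured A+1 ratio near 10% is consistent with caffeine but not DDT
hits = filter_by_isotope(candidates, measured_a1=10.0)
```

Ranking candidates that pass the mass-tolerance window by isotope agreement, as the paper describes, would sort on the same absolute difference instead of thresholding it.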
[Progress in the spectral library based protein identification strategy].
Yu, Derui; Ma, Jie; Xie, Zengyan; Bai, Mingze; Zhu, Yunping; Shu, Kunxian
2018-04-25
Mass spectrometry (MS) data have grown exponentially as MS-based proteomics has developed rapidly, and it is a great challenge to devise quick, accurate and reproducible methods to identify peptides and proteins. Spectral library searching has become a mature strategy for tandem-mass-spectra-based protein identification in proteomics: it searches the experimental spectra against a collection of confidently identified MS/MS spectra that have been observed previously, fully utilizing peak abundances, peaks from non-canonical fragment ions, and other features. This review provides a comprehensive overview of the spectral library search strategy and its two key steps, spectral library construction and spectral library searching, and discusses the progress and challenges of the strategy.
Shteynberg, David; Deutsch, Eric W.; Lam, Henry; Eng, Jimmy K.; Sun, Zhi; Tasman, Natalie; Mendoza, Luis; Moritz, Robert L.; Aebersold, Ruedi; Nesvizhskii, Alexey I.
2011-01-01
The combination of tandem mass spectrometry and sequence database searching is the method of choice for the identification of peptides and the mapping of proteomes. Over the last several years, the volume of data generated in proteomic studies has increased dramatically, which challenges the computational approaches previously developed for these data. Furthermore, a multitude of search engines have been developed that identify different, overlapping subsets of the sample peptides from a particular set of tandem mass spectrometry spectra. We present iProphet, the new addition to the widely used open-source suite of proteomic data analysis tools Trans-Proteomics Pipeline. Applied in tandem with PeptideProphet, it provides more accurate representation of the multilevel nature of shotgun proteomic data. iProphet combines the evidence from multiple identifications of the same peptide sequences across different spectra, experiments, precursor ion charge states, and modified states. It also allows accurate and effective integration of the results from multiple database search engines applied to the same data. The use of iProphet in the Trans-Proteomics Pipeline increases the number of correctly identified peptides at a constant false discovery rate as compared with both PeptideProphet and another state-of-the-art tool Percolator. As the main outcome, iProphet permits the calculation of accurate posterior probabilities and false discovery rate estimates at the level of sequence identical peptide identifications, which in turn leads to more accurate probability estimates at the protein level. Fully integrated with the Trans-Proteomics Pipeline, it supports all commonly used MS instruments, search engines, and computer platforms. 
The performance of iProphet is demonstrated on two publicly available data sets: data from a human whole cell lysate proteome profiling experiment representative of typical proteomic data sets, and from a set of Streptococcus pyogenes experiments more representative of organism-specific composite data sets. PMID:21876204
Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd
2014-09-15
The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, Nudged Elastic Band, and umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.
The development of organized visual search
Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.
2013-01-01
Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560
Boyer, C; Baujard, V; Scherrer, J R
2001-01-01
Any new user to the Internet will think that retrieving a relevant document is an easy task, especially with the wealth of sources available on this medium, but this is not the case. Even experienced users have difficulty formulating the right query to make the most of a search tool and efficiently obtain an accurate result. The goal of this work is to reduce the time and energy necessary to search for and locate medical and health information. To reach this goal we have developed HONselect [1]. The aim of HONselect is not only to improve efficiency in retrieving documents but also to respond to an increased need for a selection of relevant and accurate documents from a breadth of knowledge databases, including scientific bibliographical references, clinical trials, daily news, multimedia illustrations, conferences, forums, Web sites, clinical cases, and others. The authors based their approach on knowledge representation using the National Library of Medicine's Medical Subject Headings (NLM, MeSH) vocabulary and classification [2,3]. The innovation is to propose multilingual "one-stop searching" (one Web interface to databases currently in English, French and German) with full navigational and connectivity capabilities. Users may choose, from a given selection of related terms, the one that best suits their search, navigate the term's hierarchical tree, and directly access a selection of documents from high-quality knowledge suppliers such as the MEDLINE database, the NLM's ClinicalTrials.gov server, NewsPage's daily news, HON's media gallery, conference listings and MedHunt's Web sites [4, 5, 6, 7, 8, 9]. HONselect, developed by HON, a non-profit organisation [10], is a freely available online multilingual tool based on the MeSH thesaurus to index, select, retrieve and display accurate, up-to-date, high-level and quality documents.
Almeida, Renita A; Dickinson, J Edwin; Maybery, Murray T; Badcock, Johanna C; Badcock, David R
2010-12-01
The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial frequency (RF) patterns with controllable amounts of target/distracter overlap on which high AQ participants showed more efficient search than low AQ observers. The current study extended the design of this search task by adding two lines which traverse the display on random paths sometimes intersecting target/distracters, other times passing between them. As with the EFT, these lines segment and group the display in ways that are task irrelevant. We tested two new groups of observers and found that while RF search was slowed by the addition of segmenting lines for both groups, the high AQ group retained a consistent search advantage (reflected in a shallower gradient for reaction time as a function of set size) over the low AQ group. Further, the high AQ group were significantly faster and more accurate on the EFT compared to the low AQ group. That is, the results from the present RF search task demonstrate that segmentation and grouping created by intersecting lines does not further differentiate the groups and is therefore unlikely to be a critical factor underlying the EFT performance difference. However, once again, we found that superior EFT performance was associated with shallower gradients on the RF search task. Copyright © 2010 Elsevier Ltd. All rights reserved.
Path integration mediated systematic search: a Bayesian model.
Vickerstaff, Robert J; Merkle, Tobias
2012-08-21
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
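The Bayesian search heuristic can be caricatured on a discrete grid (a toy sketch under strong assumptions, namely perfect detection within a visited cell and a Gaussian prior centred on the path-integration estimate; the paper's model is continuous and optimises short-term detection probability):

```python
import math

def gaussian_prior(size, sigma):
    """Grid prior for nest position, centred on the path-integration estimate;
    larger sigma encodes a less accurate path integrator."""
    c = (size - 1) / 2
    p = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
          for j in range(size)] for i in range(size)]
    s = sum(map(sum, p))
    return [[v / s for v in row] for row in p]

def bayesian_search(prior, nest, max_steps=1000):
    """Repeatedly visit the cell with the highest posterior probability;
    a failed visit is evidence the nest is not there, so that cell's
    posterior drops to zero and the rest is renormalized."""
    p = [row[:] for row in prior]
    path = []
    for _ in range(max_steps):
        i, j = max(((a, b) for a in range(len(p)) for b in range(len(p))),
                   key=lambda ij: p[ij[0]][ij[1]])
        path.append((i, j))
        if (i, j) == nest:
            return path                          # nest found
        p[i][j] = 0.0                            # Bayesian update on failure
        s = sum(map(sum, p))
        p = [[v / s for v in row] for row in p]  # renormalize posterior
    return path

prior = gaussian_prior(11, sigma=2.0)       # broader sigma -> broader search
path = bayesian_search(prior, nest=(7, 4))  # nest offset from the PI estimate
```

As in the ants' behaviour, increasing sigma automatically broadens the resulting search pattern, because probability mass is spread over more distant cells before the centre is exhausted.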
Pehora, Carolyne; Gajaria, Nisha; Stoute, Melyssa; Fracassa, Sonia; Serebale-O'Sullivan, Refilwe; Matava, Clyde T
2015-06-22
The use of the Internet to search for medical and health-related information is increasing and is associated with concerns about quality and safety. We investigated parents' current use and perceptions of reliable websites for children's health information. Following institutional ethics approval, we conducted a survey of parents/guardians of children presenting for day surgery. A 20-item survey instrument developed and tested by the investigators was administered. Ninety-eight percent of respondents reported that they used the Internet to search for information about their child's health. Many respondents reported beginning their search at public search engines (80%); less than 20% reported starting their search at university/hospital-based websites. Common conditions such as colds/flu, skin conditions and fever were the most frequently searched, and unique conditions directly affecting the child were second. Despite low usage levels of university/hospital-based websites for health information, the majority of respondents (74%) regarded these as providing safe, accurate, and reliable information. In contrast, only 24% of respondents regarded public search engines as providing safe and reliable information. Fifty percent of respondents reported that they cross-checked information found on the Internet with a family physician. An unprecedented majority of parents and guardians are using the Internet for their child's health information. Of concern is that parents and guardians are currently not using reliable and safe sources of information. Health care providers should begin to focus on improving access to safe, accurate, and reliable information through various modalities, including education, designing for multiple platforms, and better search engine optimization.
Manríquez, Juan J
2008-04-01
Systematic reviews should include as many articles as possible. However, many systematic reviews use only databases with high English-language content as sources of trials. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) is an underused source of trials, and there is no validated strategy for searching for clinical trials in this database. The objective of this study was to develop a sensitive search strategy for clinical trials in LILACS. An analytical survey was performed. Several single- and multiple-term search strategies were tested for their ability to retrieve clinical trials in LILACS. The sensitivity, specificity, and accuracy of each single- and multiple-term strategy were calculated using the results of a hand-search of 44 Chilean journals as the gold standard. After combining the most sensitive, specific, and accurate single- and multiple-term search strategies, a strategy with a sensitivity of 97.75% (95% confidence interval [CI]=95.98-99.53) and a specificity of 61.85% (95% CI=61.19-62.51) was obtained. LILACS is a source of trials that could improve systematic reviews. A new highly sensitive search strategy for clinical trials in LILACS has been developed. It is hoped this search strategy will improve and increase the utilization of LILACS in future systematic reviews.
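The sensitivity, specificity, and accuracy reported above follow the standard definitions against a hand-search gold standard; a minimal sketch with hypothetical record IDs (the counts here are illustrative, not the study's data):

```python
def diagnostic_metrics(retrieved, gold_trials, all_records):
    """Sensitivity, specificity and accuracy of a search strategy, taking a
    hand-search of the journals as the gold standard."""
    tp = len(retrieved & gold_trials)              # trials the strategy found
    fn = len(gold_trials - retrieved)              # trials it missed
    fp = len(retrieved - gold_trials)              # non-trials it retrieved
    tn = len(all_records - retrieved - gold_trials)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(all_records)
    return sensitivity, specificity, accuracy

all_records = set(range(100))          # hypothetical database record IDs
gold_trials = set(range(10))           # 10 true trials per the hand-search
retrieved = set(range(9)) | {42, 43}   # strategy finds 9 trials + 2 false hits
sens, spec, acc = diagnostic_metrics(retrieved, gold_trials, all_records)
```

The trade-off in the abstract (97.75% sensitivity at 61.85% specificity) is typical of search filters tuned for sensitivity: widening the query raises tp at the cost of fp.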
Playing shooter and driving videogames improves top-down guidance in visual search.
Wu, Sijing; Spence, Ian
2013-05-01
Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest-neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller indexing size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging since we need to balance running-time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash in chemical databases.
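The hash-table idea behind this approach can be illustrated generically (a toy sketch using named substructure features and Tanimoto set similarity, not G-hash's actual graph kernel or hashing scheme; compound names and features here are illustrative):

```python
from collections import defaultdict

def build_index(compounds):
    """Inverted index: feature -> compound ids, supporting fast candidate lookup
    instead of scoring the whole database."""
    index = defaultdict(set)
    for cid, feats in compounds.items():
        for f in feats:
            index[f].add(cid)
    return index

def knn(query_feats, compounds, index, k=2):
    """k-NN by Tanimoto similarity, scoring only compounds that share at least
    one feature with the query (the hash table's speed-up)."""
    candidates = set().union(*(index.get(f, set()) for f in query_feats))
    def tanimoto(cid):
        a, b = set(query_feats), compounds[cid]
        return len(a & b) / len(a | b)
    return sorted(candidates, key=tanimoto, reverse=True)[:k]

compounds = {                 # toy feature sets standing in for graph features
    "aspirin":   {"ester", "carboxyl", "benzene"},
    "ibuprofen": {"carboxyl", "benzene", "isobutyl"},
    "hexane":    {"alkane"},
}
index = build_index(compounds)
nearest = knn({"carboxyl", "benzene"}, compounds, index)
```

Only aspirin and ibuprofen are ever scored for this query; hexane shares no feature with it and is pruned by the index, which is the scalability argument the abstract makes for hash-based candidate lookup.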
A literature search tool for intelligent extraction of disease-associated genes.
Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P
2014-01-01
To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than those in other general-purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard, manually updated gene-disorder databases and against automated databases of similar functionality, we show that our tool can search the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.
Mining Social Media and Web Searches For Disease Detection
Yang, Y. Tony; Horneffer, Michael; DiLisio, Nicole
2013-01-01
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency of Internet use via computers and mobile devices provides an opportunity for social media to be the medium through which people receive valuable health information quickly and directly. While traditional methods of detection relied predominantly on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for more timely and accurate dissemination of information and analysis. This article provides an overview and discussion of these sources of timely and accurate information, which are especially useful for the rapid identification of infectious disease outbreaks, where prompt detection is necessary to develop effective public health responses. These web-based platforms include search queries, data mining of web and social media, processing and analysis of blogs containing epidemic keywords, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increased public participation may not necessarily mean the information provided is more accurate. PMID:25170475
A Semantic Approach for Knowledge Discovery to Help Mitigate Habitat Loss in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Graves, S.; Hardin, D.
2008-12-01
Noesis is a meta-search engine and a resource aggregator that uses domain ontologies to provide scoped search capabilities. Ontologies enable Noesis to help users refine their searches for information on the open web and in hidden web locations such as data catalogues with standardized but discipline-specific vocabularies. Through its ontologies Noesis provides a guided refinement of search queries, which produces complete and accurate searches while reducing the user's burden of experimenting with different search strings. All search results are organized by categories (e.g., all results from Google are grouped together) which may be selected or omitted according to the desire of the user. During the past two years ontologies were developed for sea grasses in the Gulf of Mexico and were used to support a habitat restoration demonstration project. Currently these ontologies are being augmented to address the special characteristics of mangroves. These new ontologies will extend the demonstration project to broader regions of the Gulf, including protected mangrove locations in coastal Mexico. Noesis contributes to the decision making process by producing a comprehensive list of relevant resources based on the semantic information contained in the ontologies. Ontologies are organized as tree-like taxonomies, in which child nodes represent the Specializations and parent nodes represent the Generalizations of a node or concept. Specializations can be used to make the search more detailed, while Generalizations make it broader. Ontologies are also used to link two syntactically different terms to one semantic concept (synonyms). Appending a synonym to the query expands the search, thus providing better search coverage. Every concept has a set of properties that are neither in the same inheritance hierarchy (Specializations/Generalizations) nor equivalent (synonyms).
These are called Related Concepts and they are captured in the ontology through property relationships. By using Related Concepts users can search for resources with respect to a particular property. Noesis automatically generates searches that include all of these capabilities, removing the burden from the user and producing broader and more accurate search results. This presentation will demonstrate the features of Noesis and describe its application to habitat studies in the Gulf of Mexico.
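The query-expansion mechanism described above, with synonyms always appended, specializations to narrow, and generalizations to broaden, can be sketched with a toy ontology. The entries and term names below are invented for illustration; Noesis's actual ontologies are far richer.

```python
# Hypothetical ontology fragment for illustration only.
ONTOLOGY = {
    "seagrass": {
        "synonyms": ["sea grass"],
        "specializations": ["turtle grass", "shoal grass"],
        "generalizations": ["marine vegetation"],
    }
}

def expand_query(term, narrow=False, broaden=False):
    entry = ONTOLOGY.get(term, {})
    # Synonyms are always appended to improve coverage.
    terms = [term] + entry.get("synonyms", [])
    if narrow:
        terms += entry.get("specializations", [])
    if broaden:
        terms += entry.get("generalizations", [])
    return " OR ".join(f'"{t}"' for t in terms)
```

A plain expansion of "seagrass" adds only the synonym; passing `narrow=True` or `broaden=True` pulls in child or parent concepts respectively.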
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the scheduling model is a complex, nonlinear multiobjective problem; achieving an accurate solution is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem in order to search for feasible, preliminary solutions and construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
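A stripped-down version of the two steps, representing an uncertain quantity as an interval and then refining a candidate solution with simulated annealing, might look like this. The single-variable cost function and all parameters are invented for the sketch; the paper's actual model couples many objectives and constraints.

```python
import math
import random

def midpoint(interval):
    lo, hi = interval
    return (lo + hi) / 2.0

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.95, iters=500, seed=0):
    rng = random.Random(seed)
    x = best = x0
    temperature = t0
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        delta = cost(candidate) - cost(x)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(temperature, 1e-12)):
            x = candidate
            if cost(x) < cost(best):
                best = x
        temperature *= cooling
    return best

# Uncertain wind output as an interval, reduced to its midpoint in the
# simplified first stage; annealing then refines the dispatch level.
wind_interval = (40.0, 60.0)
target = midpoint(wind_interval)
dispatch = simulated_annealing(lambda g: (g - target) ** 2, x0=0.0, step=5.0)
```

With a fixed seed the annealer settles near the interval midpoint of 50, the minimizer of the toy cost.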
Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.
2006-12-01
The goal for search engines is to return results that are both accurate and complete: a search engine should find only what you really want, and find everything you really want. Search engines (even meta-search engines) lack semantics. Search is simply based on string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user in refining their search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator: it categorizes the search results from different online resources, such as education materials, publications, datasets and web search engines, that might be of interest to the user.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
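The basic mechanics of an ARGO-style model, regressing current influenza activity on its own recent history plus concurrent search-volume data, can be sketched with ordinary least squares. The synthetic data and single lag below are illustrative assumptions; the published ARGO uses many more lags, many search terms, and regularization.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_argo_like(ili, search):
    # OLS of ili[t] on [1, ili[t-1], search[t]] via the normal equations.
    rows = [[1.0, ili[t - 1], search[t]] for t in range(1, len(ili))]
    y = [ili[t] for t in range(1, len(ili))]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    return solve_linear(XtX, Xty)

# Synthetic demo where ili[t] = 0.5*ili[t-1] + 0.3*search[t] exactly.
search = list(range(1, 21))
ili = [1.0]
for t in range(1, 20):
    ili.append(0.5 * ili[-1] + 0.3 * search[t])
beta = fit_argo_like(ili, search)
```

On this noiseless series the fit recovers the generating coefficients, and a one-step-ahead nowcast is just the fitted linear combination of the latest observation and the current search volume.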
Dong, Runze; Pan, Shuo; Peng, Zhenling; Zhang, Yang; Yang, Jianyi
2018-05-21
With the rapid increase of the number of protein structures in the Protein Data Bank, it becomes urgent to develop algorithms for efficient protein structure comparisons. In this article, we present the mTM-align server, which consists of two closely related modules: one for structure database search and the other for multiple structure alignment. The database search is speeded up based on a heuristic algorithm and a hierarchical organization of the structures in the database. The multiple structure alignment is performed using the recently developed algorithm mTM-align. Benchmark tests demonstrate that our algorithms outperform other peering methods for both modules, in terms of speed and accuracy. One of the unique features for the server is the interplay between database search and multiple structure alignment. The server provides service not only for performing fast database search, but also for making accurate multiple structure alignment with the structures found by the search. For the database search, it takes about 2-5 min for a structure of a medium size (∼300 residues). For the multiple structure alignment, it takes a few seconds for ∼10 structures of medium sizes. The server is freely available at: http://yanglab.nankai.edu.cn/mTM-align/.
Chemical-text hybrid search engines.
Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J
2010-01-01
As the amount of chemical literature increases, it is critical that researchers be enabled to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently used for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search, and the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but also was applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers at much improved speed and accuracy.
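The canonicalize-before-indexing idea can be illustrated with a tiny in-memory index. The hand-written alias table is a stand-in assumption: a real ECKI pipeline resolves chemical entities from structure (so that any synonym or format maps to one canonical token), not from a fixed synonym list.

```python
# Hypothetical synonym table for illustration only.
CANONICAL = {
    "acetylsalicylic acid": "ASPIRIN",
    "aspirin": "ASPIRIN",
    "asa": "ASPIRIN",
}

def canonicalize(text):
    out = text.lower()
    # Replace longer aliases first so substrings don't clobber them.
    for alias, canon in sorted(CANONICAL.items(), key=lambda kv: -len(kv[0])):
        out = out.replace(alias, canon)
    return out

INDEX = {}

def add_document(doc_id, text):
    # Documents are canonicalized once, at indexing time.
    INDEX[doc_id] = canonicalize(text)

def search(term):
    # Queries go through the same canonicalization, so any alias matches.
    needle = canonicalize(term)
    return sorted(d for d, t in INDEX.items() if needle in t)

add_document("d1", "Acetylsalicylic acid reduces fever.")
add_document("d2", "Recommended aspirin dosage for adults.")
```

Because both documents and queries are mapped to the canonical token, searching for "ASA" retrieves the document that only ever mentions "acetylsalicylic acid".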
Shah, Jai L.; Tandon, Neeraj; Keshavan, Matcheri S.
2016-01-01
Aim Accurate prediction of which individuals will go on to develop psychosis would assist early intervention and prevention paradigms. We sought to review investigations of prospective psychosis prediction based on markers and variables examined in longitudinal familial high-risk (FHR) studies. Methods We performed literature searches in MedLine, PubMed and PsycINFO for articles assessing performance characteristics of predictive clinical tests in FHR studies of psychosis. Studies were included if they reported one or more predictive variables in subjects at FHR for psychosis. We complemented this search strategy with references drawn from articles, reviews, book chapters and monographs. Results Across generations of familial high-risk projects, predictive studies have investigated behavioral, cognitive, psychometric, clinical, neuroimaging, and other markers. Recent analyses have incorporated multivariate and multi-domain approaches to risk ascertainment, although with still generally modest results. Conclusions While a broad range of risk factors has been identified, no individual marker or combination of markers can at this time enable accurate prospective prediction of emerging psychosis for individuals at FHR. We outline the complex and multi-level nature of psychotic illness, the myriad of factors influencing its development, and methodological hurdles to accurate and reliable prediction. Prospects and challenges for future generations of FHR studies are discussed in the context of early detection and intervention strategies. PMID:23693118
Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.
Demelo, Jonathan; Parsons, Paul; Sedig, Kamran
2017-02-02
Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. 
©Jonathan Demelo, Paul Parsons, Kamran Sedig. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.02.2017.
Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.
Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo
2015-01-01
Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. This paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method for predicting energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study shows that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that produce accurate forecasts also on unseen data.
Oriented regions grouping based candidate proposal for infrared pedestrian detection
NASA Astrophysics Data System (ADS)
Wang, Jiangtao; Zhang, Jingai; Li, Huaijiang
2018-04-01
Effectively and accurately locating the positions of pedestrian candidates in image is a key task for the infrared pedestrian detection system. In this work, a novel similarity measuring metric is designed. Based on the selective search scheme, the developed similarity measuring metric is utilized to yield the possible locations for pedestrian candidate. Besides this, corresponding diversification strategies are also provided according to the characteristics of the infrared thermal imaging system. Experimental results indicate that the presented scheme can achieve more efficient outputs than the traditional selective search methodology for the infrared pedestrian detection task.
NASA Astrophysics Data System (ADS)
Jaranowski, Piotr; Królak, Andrzej
2000-03-01
We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of the Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given a 100 Gflops computational power an all-sky search for observation time of 7 days and directed search for observation time of 120 days are possible whereas an all-sky search for 120 days of observation time is computationally prohibitive.
Rapid development of Proteomic applications with the AIBench framework.
López-Fernández, Hugo; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Méndez Reboredo, José R; Santos, Hugo M; Carreira, Ricardo J; Capelo-Martínez, José L; Fdez-Riverola, Florentino
2011-09-15
In this paper we present two case studies of Proteomics applications development using the AIBench framework, a Java desktop application framework mainly focused in scientific software development. The applications presented in this work are Decision Peptide-Driven, for rapid and accurate protein quantification, and Bacterial Identification, for Tuberculosis biomarker search and diagnosis. Both tools work with mass spectrometry data, specifically with MALDI-TOF spectra, minimizing the time required to process and analyze the experimental data. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
Biggs, Adam T; Mitroff, Stephen R
2014-01-01
Visual search, locating target items among distractors, underlies daily activities ranging from critical tasks (e.g., looking for dangerous objects during security screening) to commonplace ones (e.g., finding your friends in a crowded bar). Both professional and nonprofessional individuals conduct visual searches, and the present investigation is aimed at understanding how they perform similarly and differently. We administered a multiple-target visual search task to both professional (airport security officers) and nonprofessional participants (members of the Duke University community) to determine how search abilities differ between these populations and what factors might predict accuracy. There were minimal overall accuracy differences, although the professionals were generally slower to respond. However, the factors that predicted accuracy varied drastically between groups: variability in search consistency (how similarly an individual searched from trial to trial in terms of speed) best explained accuracy for professional searchers (more consistent professionals were more accurate), whereas search speed (how long an individual took to complete a search when no targets were present) best explained accuracy for nonprofessional searchers (slower nonprofessionals were more accurate). These findings suggest that professional searchers may utilize different search strategies from those of nonprofessionals, and that search consistency, in particular, may provide a valuable tool for enhancing professional search accuracy.
Semantically Enriching the Search System of a Music Digital Library
NASA Astrophysics Data System (ADS)
de Juan, Paloma; Iglesias, Carlos
Traditional search systems are usually based on keywords, a very simple and convenient mechanism to express a need for information. This is the most popular way of searching the Web, although it is not always an easy task to accurately summarize a natural language query in a few keywords. Working with keywords means losing the context, which is the only thing that can help us deal with ambiguity. This is the biggest problem of keyword-based systems. Semantic Web technologies seem a perfect solution to this problem, since they make it possible to represent the semantics of a given domain. In this chapter, we present three projects, Harmos, Semusici and Cantiga, whose aim is to provide access to a music digital library. We will describe two search systems, a traditional one and a semantic one, developed in the context of these projects and compare them in terms of usability and effectiveness.
Nowcasting Intraseasonal Recreational Fishing Harvest with Internet Search Volume
Carter, David W.; Crosson, Scott; Liese, Christopher
2015-01-01
Estimates of recreational fishing harvest are often unavailable until after a fishing season has ended. This lag in information complicates efforts to stay within the quota. The simplest way to monitor quota within the season is to use harvest information from the previous year. This works well when fishery conditions are stable, but is inaccurate when fishery conditions are changing. We develop regression-based models to “nowcast” intraseasonal recreational fishing harvest in the presence of changing fishery conditions. Our basic model accounts for seasonality, changes in the fishing season, and important events in the fishery. Our extended model uses Google Trends data on the internet search volume relevant to the fishery of interest. We demonstrate the model with the Gulf of Mexico red snapper fishery where the recreational sector has exceeded the quota nearly every year since 2007. Our results confirm that data for the previous year works well to predict intraseasonal harvest for a year (2012) where fishery conditions are consistent with historic patterns. However, for a year (2013) of unprecedented harvest and management activity our regression model using search volume for the term “red snapper season” generates intraseasonal nowcasts that are 27% more accurate than the basic model without the internet search information and 29% more accurate than the prediction based on the previous year. Reliable nowcasts of intraseasonal harvest could make in-season (or in-year) management feasible and increase the likelihood of staying within quota. Our nowcasting approach using internet search volume might have the potential to improve quota management in other fisheries where conditions change year-to-year. PMID:26348645
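The mechanics of nowcasting harvest from search volume reduce to a regression with the current period's search index as a predictor. A minimal one-regressor ordinary-least-squares sketch follows; the numbers are made up to show the arithmetic, and the paper's actual models also include seasonality, season-length, and event terms.

```python
def ols_fit(x, y):
    # Ordinary least squares for a single regressor: y = a + b*x.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

# Made-up history: weekly search volume vs. estimated harvest.
search_volume = [10.0, 20.0, 30.0, 40.0]
harvest = [105.0, 205.0, 305.0, 405.0]
intercept, slope = ols_fit(search_volume, harvest)

# Nowcast for a new week whose search volume is already observable.
nowcast = intercept + slope * 50.0
```

Because search volume is available in near real time, the fitted line turns a quantity observable today into an estimate of harvest that would otherwise arrive only after the season.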
NASA Technical Reports Server (NTRS)
1985-01-01
Exactatron, an accurate weighing and spotting system in bowling ball manufacture, was developed by Ebonite International engineers with the assistance of a NASA computer search which identified Jet Propulsion Laboratory (JPL) technology. The JPL research concerned a means of determining the center of an object's mass, and an apparatus for measuring liquid viscosity, enabling Ebonite to identify the exact spotting of the drilling point for top weighting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarolli, Jay G.; Naes, Benjamin E.; Butler, Lamar
A fully convolutional neural network (FCN) was developed to supersede automatic or manual thresholding algorithms used for tabulating SIMS particle search data. The FCN was designed to perform a binary classification of pixels in each image belonging to a particle or not, thereby effectively removing background signal without manually or automatically determining an intensity threshold. Using 8,000 images from 28 different particle screening analyses, the FCN was trained to accurately predict pixels belonging to a particle with near 99% accuracy. Background-eliminated images were then segmented using a watershed technique in order to determine isotopic ratios of particles. A comparison of the isotopic distributions of an independent data set segmented using the neural network, compared to a commercially available automated particle measurement (APM) program developed by CAMECA, highlighted the necessity for effective background removal to ensure that resulting particle identification is not only accurate, but preserves valuable signal that could be lost due to improper segmentation. The FCN approach improves the robustness of current state-of-the-art particle searching algorithms by reducing user input biases, resulting in an improved absolute signal per particle and decreased uncertainty of the determined isotope ratios.
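Once background pixels are removed, tabulating particles reduces to labeling and counting the remaining foreground regions. A minimal 4-connected component labeler on an invented binary mask illustrates this step; it is a stand-in for the watershed segmentation the authors apply, which additionally splits touching particles.

```python
def connected_components(mask):
    # Label 4-connected foreground pixels with a flood fill.
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n_labels = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                n_labels += 1
                stack = [(i, j)]
                labels[i][j] = n_labels
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w and mask[yy][xx] and not labels[yy][xx]:
                            labels[yy][xx] = n_labels
                            stack.append((yy, xx))
    return n_labels, labels

# Invented binary mask: two separate "particles".
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
count, labels = connected_components(mask)
```

The quality of the background mask directly controls the result here: an over-aggressive threshold erodes particle pixels before counting, which is exactly the signal loss the FCN approach is meant to avoid.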
Conjugate-gradient optimization method for orbital-free density functional calculations.
Jiang, Hong; Yang, Weitao
2004-08-01
Orbital-free density functional theory as an extension of traditional Thomas-Fermi theory has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we developed a conjugate-gradient method for the numerical solution of spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredient of the method is an approximate line-search scheme and a collective treatment of two spin densities in the case of spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
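For the quadratic model problems underlying such minimizations, the classic linear conjugate-gradient iteration shows the structure of the method; note its step length alpha is the exact line search for a quadratic, whereas the paper uses an approximate line-search scheme for the nonlinear extended Thomas-Fermi functional. The small system below is invented for the demonstration.

```python
def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=100):
    # Solve A x = b (A symmetric positive definite), i.e. minimize
    # the quadratic 0.5 x^T A x - b^T x, by conjugate gradients.
    n = len(b)
    x = [0.0] * n if x0 is None else list(x0)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [bi - ai for bi, ai in zip(b, matvec(x))]  # residual = -gradient
    d = list(r)                                    # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs < tol:
            break
        Ad = matvec(d)
        # Exact line search along d for a quadratic objective.
        alpha = rs / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        # New direction is conjugate to all previous ones.
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

solution = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

On an n-dimensional quadratic the iteration terminates in at most n steps; for the 2x2 example it reaches the exact solution (1/11, 7/11) in two.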
Skirton, Heather; Goldsmith, Lesley; Jackson, Leigh; Lewis, Celine; Chitty, Lyn S
2015-12-01
The development of non-invasive prenatal testing has increased accessibility of fetal testing. Companies are now advertising prenatal testing for aneuploidy via the Internet. The aim of this systematic review of websites advertising non-invasive prenatal testing for aneuploidy was to explore the nature of the information being provided to potential users. We systematically searched two Internet search engines for relevant websites using the following terms: 'prenatal test', 'antenatal test', 'non-invasive test', 'noninvasive test', 'cell-free fetal DNA', 'cffDNA', 'Down syndrome test' or 'trisomy test'. We examined the first 200 websites identified through each search. Relevant web-based text was examined, and key topics were identified, tabulated and counted. To analyse the text further, we used thematic analysis. Forty websites were identified. Whilst a number of sites provided balanced, accurate information, in the majority, supporting evidence was not provided to underpin the information, and there was inadequate information on the need for an invasive test to definitively diagnose aneuploidy. The information provided on many websites does not comply with professional recommendations. Guidelines are needed to ensure that companies offering prenatal testing via the Internet provide accurate and comprehensible information. © 2015 John Wiley & Sons, Ltd.
Developing a dengue forecast model using machine learning: A case study in China.
Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-10-01
In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
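The two model-assessment measures named above, RMSE and R-squared, are simple to state directly. A pure-Python sketch (the helper names are illustrative):

```python
def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```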
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
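The heart of such an iterative improvement heuristic is easy to sketch: propose a small change to the visit order on each simulated foraging bout and keep it only if the circuit shortens. This distance-only toy omits the reward-prioritization and learning components of the published model:

```python
import math
import random

def route_length(points, order):
    """Total length of a closed foraging circuit visiting points in order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def improve_trapline(points, iterations=2000, seed=0):
    """Iterative improvement: try a random segment reversal each bout and
    keep it only if the circuit gets shorter (a 2-opt-style heuristic)."""
    rng = random.Random(seed)
    order = list(range(len(points)))
    best = route_length(points, order)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(len(points)), 2))
        candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        length = route_length(points, candidate)
        if length < best:               # accept only improvements
            order, best = candidate, length
    return order, best
```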
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1992-01-01
The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.
FlavonoidSearch: A system for comprehensive flavonoid annotation by mass spectrometry.
Akimoto, Nayumi; Ara, Takeshi; Nakajima, Daisuke; Suda, Kunihiro; Ikeda, Chiaki; Takahashi, Shingo; Muneto, Reiko; Yamada, Manabu; Suzuki, Hideyuki; Shibata, Daisuke; Sakurai, Nozomu
2017-04-28
Currently, in mass spectrometry-based metabolomics, limited reference mass spectra are available for flavonoid identification. In the present study, a database of probable mass fragments for 6,867 known flavonoids (FsDatabase) was manually constructed based on new structure- and fragmentation-related rules using new heuristics to overcome flavonoid complexity. We developed the FlavonoidSearch system for flavonoid annotation, which consists of the FsDatabase and a computational tool (FsTool) to automatically search the FsDatabase using the mass spectra of metabolite peaks as queries. This system showed the highest identification accuracy for the flavonoid aglycone when compared to existing tools and revealed accurate discrimination between the flavonoid aglycone and other compounds. Sixteen new flavonoids were found from parsley, and the diversity of the flavonoid aglycone among different fruits and vegetables was investigated.
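At its core, matching observed peaks against a database of precomputed fragment masses is a tolerance lookup. A schematic version (the real FsTool scoring is considerably richer than this, and the tolerance value here is an assumption):

```python
def match_fragments(peaks_mz, fragment_masses, tol_ppm=10.0):
    """Return (peak, database mass) pairs that agree within a relative
    (ppm) mass tolerance -- the core lookup a spectral search tool
    performs against a fragment database."""
    matches = []
    for mz in peaks_mz:
        for mass in fragment_masses:
            # ppm tolerance scales with the mass being matched
            if abs(mz - mass) <= mass * tol_ppm * 1e-6:
                matches.append((mz, mass))
    return matches
```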
Block Architecture Problem with Depth First Search Solution and Its Application
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Abdullah, Dahlan; Simarmata, Janner; Pranolo, Andri; Saleh Ahmar, Ansari; Hidayat, Rahmat; Napitupulu, Darmawan; Nurdiyanto, Heri; Febriadi, Bayu; Zamzami, Z.
2018-01-01
Searching is a common process performed by many computer users. The Raita algorithm is one algorithm that can be used to match and find information according to the patterns entered. The Raita algorithm was applied to a file search application written in the Java programming language; testing showed that the application searches files quickly, returns accurate results, and supports many data types.
Searching Process with Raita Algorithm and its Application
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Saleh Ahmar, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan; Putera Utama Siahaan, Andysah; Hasan Siregar, Muhammad Noor; Nasution, Nurliana; Sundari, Siti; Sriadhi, S.
2018-04-01
Searching is a common process performed by many computer users. The Raita algorithm is one algorithm that can be used to match and find information according to the patterns entered. The Raita algorithm was applied to a file search application written in the Java programming language; testing showed that the application searches files quickly, returns accurate results, and supports many data types.
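The Raita algorithm itself is compact: a Boyer-Moore-Horspool variant that compares the pattern's last, first, and middle characters before falling back to a full comparison, shifting with a bad-character table. A sketch in Python rather than the Java of the paper:

```python
def raita_search(text, pattern):
    """Raita string matching: check the pattern's last, first, and middle
    characters before comparing the rest, shifting with a Horspool-style
    bad-character table. Returns all match positions."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Bad-character shift: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    first, middle, last = pattern[0], pattern[m // 2], pattern[-1]
    positions, i = [], 0
    while i <= n - m:
        if (text[i + m - 1] == last and text[i] == first
                and text[i + m // 2] == middle
                and text[i:i + m] == pattern):
            positions.append(i)
        i += shift.get(text[i + m - 1], m)   # characters absent from the
    return positions                          # pattern allow a full-length jump
```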
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.
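Stripped of the Horn-clause specifics, the search loop is greedy hill-climbing over operator applications, ranked by an evaluation function. A domain-independent skeleton (the information-based measure and frontier-deriving operators of the paper are abstracted into plain callables):

```python
def hill_climb(state, operators, evaluate, max_steps=100):
    """Greedy hill-climbing: repeatedly apply the operator whose result
    scores best under the evaluation function, stopping at a local optimum."""
    score = evaluate(state)
    for _ in range(max_steps):
        candidates = [op(state) for op in operators]
        best = max(candidates, key=evaluate, default=None)
        if best is None or evaluate(best) <= score:
            break                       # no operator improves the score
        state, score = best, evaluate(best)
    return state, score
```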
Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.
Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria
2014-11-01
This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
NASA Astrophysics Data System (ADS)
Guo, Shengwen; Fei, Baowei
2009-02-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung
Guo, Shengwen; Fei, Baowei
2013-01-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531
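The fan-shaped search region replaces the conventional profile search along the boundary normal with rays spread through an angular window around the normal. A purely geometric sketch (parameter names and the even angular spacing are illustrative assumptions):

```python
import math

def fan_search_points(center, normal_angle, radius_steps, fan_half_angle, n_rays):
    """Generate candidate search points in a fan-shaped region around the
    boundary normal, instead of sampling only along the normal direction.
    Requires n_rays >= 2."""
    points = []
    for k in range(n_rays):
        # Spread rays evenly across [-fan_half_angle, +fan_half_angle].
        theta = normal_angle - fan_half_angle + 2 * fan_half_angle * k / (n_rays - 1)
        for r in radius_steps:
            points.append((center[0] + r * math.cos(theta),
                           center[1] + r * math.sin(theta)))
    return points
```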
Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest
NASA Astrophysics Data System (ADS)
Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun
2018-02-01
Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
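The difference between the two neighbor searches compared in the paper can be sketched directly: a fixed-radius query returns whatever falls inside one radius, while an adaptive query grows the radius until enough neighbors are found, so features stay defined even where the point cloud is sparse. The growth rule below is an assumption for illustration, not the authors' exact criterion:

```python
import math

def radius_neighbors(points, query, radius):
    """Fixed-radius near-neighbor search: all points within radius of query."""
    return [p for p in points if math.dist(p, query) <= radius]

def adaptive_radius_neighbors(points, query, k_min, radius,
                              growth=1.5, max_tries=20):
    """Adaptive variant: enlarge the radius until at least k_min neighbors
    are found (or the retry budget is exhausted)."""
    for _ in range(max_tries):
        hits = radius_neighbors(points, query, radius)
        if len(hits) >= k_min:
            return hits, radius
        radius *= growth
    return radius_neighbors(points, query, radius), radius
```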
Seeking out SARI: an automated search of electronic health records.
O'Horo, John C; Dziadzko, Mikhail; Sakusic, Amra; Ali, Rashid; Sohail, M Rizwan; Kor, Daryl J; Gajic, Ognjen
2018-06-01
The definition of severe acute respiratory infection (SARI) - a respiratory illness with fever and cough, occurring within the past 10 days and requiring hospital admission - has not been evaluated for critically ill patients. Using integrated electronic health records data, we developed an automated search algorithm to identify SARI cases in a large cohort of critical care patients and evaluate patient outcomes. We conducted a retrospective cohort study of all admissions to a medical intensive care unit from August 2009 through March 2016. Subsets were randomly selected for deriving and validating a search algorithm, which was compared with temporal trends in laboratory-confirmed influenza to ensure that SARI was correlated with influenza. The algorithm was applied to the cohort to identify clinical differences for patients with and without SARI. For identifying SARI, the algorithm (sensitivity, 86.9%; specificity, 95.6%) outperformed billing-based searching (sensitivity, 73.8%; specificity, 78.8%). Automated searching correlated with peaks in laboratory-confirmed influenza. Adjusted for severity of illness, SARI was associated with more hospital, intensive care unit and ventilator days but not with death or dismissal to home. The search algorithm accurately identified SARI for epidemiologic study and surveillance.
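The reported operating characteristics come straight from a confusion matrix tallied against the manually reviewed reference standard. A sketch with hypothetical counts:

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    """Sensitivity and specificity of a search algorithm against a
    manually reviewed reference standard."""
    sensitivity = tp / (tp + fn)   # true cases the algorithm found
    specificity = tn / (tn + fp)   # non-cases it correctly excluded
    return sensitivity, specificity
```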
Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Massanari, R Michael
2016-07-19
Finding highly relevant articles from biomedical databases is challenging, not only because it is often difficult to accurately express a user's underlying intention through keywords but also because a keyword-based query normally returns a long list of hits, many of which are unwanted by the user. This paper proposes a novel biomedical literature search system, called BiomedSearch, which supports complex queries and relevance feedback. The system employed association mining techniques to build a k-profile representing a user's relevance feedback. More specifically, we developed a weighted interest measure and an association mining algorithm to find the strength of association between a query and each concept in the article(s) selected by the user as feedback. The top concepts were utilized to form a k-profile used for the next-round search. BiomedSearch relies on Unified Medical Language System (UMLS) knowledge sources to map text files to standard biomedical concepts. It was designed to support queries of any level of complexity. A prototype of the BiomedSearch software was built and preliminarily evaluated using the Genomics data from the TREC (Text Retrieval Conference) 2006 Genomics Track. Initial experimental results indicated that BiomedSearch increased the mean average precision (MAP) for a set of queries. With UMLS and association mining techniques, BiomedSearch can effectively utilize users' relevance feedback to improve the performance of biomedical literature search.
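The k-profile idea, scoring concepts that co-occur with the query in user-selected feedback articles and keeping the strongest k, can be sketched with a simple co-occurrence measure. The paper's weighted interest measure is more elaborate; the scoring rule below is an illustrative assumption:

```python
def build_k_profile(feedback_docs, query_terms, k=5):
    """Score each concept in the user's feedback documents by a simple
    co-occurrence-based interest measure and keep the top k.

    feedback_docs: iterable of sets of concept strings (one set per article).
    """
    query = set(query_terms)
    scores = {}
    for doc in feedback_docs:
        # Weight the document by how strongly it overlaps the query.
        overlap = len(doc & query) / max(len(query), 1)
        for concept in doc - query:
            scores[concept] = scores.get(concept, 0.0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:k]
```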
Hogenboom, A C; van Leerdam, J A; de Voogt, P
2009-01-16
The European Reach legislation will possibly drive producers to develop newly designed chemicals that will be less persistent, bioaccumulative or toxic. If this innovation leads to an increased use of more hydrophilic chemicals it may result in higher mobilities of chemicals in the aqueous environment. As a result, the drinking water companies may face stronger demands on removal processes as the hydrophilic compounds inherently are more difficult to remove. Monitoring efforts will also experience a shift in focus to more water-soluble compounds. Screening source waters on the presence of (emerging) contaminants is an essential step in the control of the water cycle from source to tap water. In this article, some of our experiences are presented with the hybrid linear ion trap (LTQ) FT Orbitrap mass spectrometer, in the area of chemical water analysis. A two-pronged strategy in mass spectrometric research was employed: (i) exploring effluent, surface, ground- and drinking-water samples searching for accurate masses corresponding to target compounds (and their product ions) known from, e.g. priority lists or the scientific literature and (ii) full-scan screening of water samples in search of 'unknown' or unexpected masses, followed by MS(n) experiments to elucidate the structure of the unknowns. Applications of both approaches to emerging water contaminants are presented and discussed. Results are presented for target analysis search for pharmaceuticals, benzotriazoles, illicit drugs and for the identification of unknown compounds in a groundwater sample and in a polar extract of a landfill soil sample (a toxicity identification evaluation bioassay sample). The applications of accurate mass screening and identification described in this article demonstrate that the LC-LTQ FT Orbitrap MS is well equipped to meet the challenges posed by newly emerging polar contaminants.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-16
..., Genealogy Index Search Request and Genealogy Records Request. The Department of Homeland Security, U.S... the Forms/Collections: Genealogy Index Search Request and Genealogy Records Request. (3) Agency form.... USCIS will use these forms to facilitate an accurate and timely response to genealogy index search...
Molecular Genetic Testing in Reward Deficiency Syndrome (RDS): Facts and Fiction.
Blum, Kenneth; Badgaiyan, Rajendra D; Agan, Gozde; Fratantonio, James; Simpatico, Thomas; Febo, Marcelo; Haberstick, Brett C; Smolen, Andrew; Gold, Mark S
The Brain Reward Cascade (BRC) is an interaction of neurotransmitters and their respective genes to control the amount of dopamine released within the brain. Any variations within this pathway, whether genetic or environmental (epigenetic), may result in addictive behaviors or RDS, which was coined to define addictive behaviors and their genetic components. To carry out this review we searched a number of important databases including: Filtered: Cochrane Systematic Reviews; DARE; PubMed Central Clinical Queries; National Guideline Clearinghouse and unfiltered resources: PsychINFO; ACP PIER; PsychSage; PubMed/Medline. The major search terms included: dopamine agonist therapy for addiction; dopamine agonist therapy for reward dependence; dopamine antagonist therapy for addiction; dopamine antagonist therapy for reward dependence; and neurogenetics of RDS. While there are many studies claiming a genetic association with RDS behavior, not all are scientifically accurate. Albeit our bias, this Clinical Pearl discusses the facts and fictions behind molecular genetic testing in RDS and the significance behind the development of the Genetic Addiction Risk Score (GARS PREDX™), the first test to accurately predict one's genetic risk for RDS.
Liquid electrolyte informatics using an exhaustive search with linear regression.
Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato
2018-06-14
Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, the properties of disordered liquid solutions have been less studied by data-driven information techniques. Here, we examined the estimation accuracy and efficiency of three information techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), using coordination energy and melting point as test liquid properties. We then confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can provide the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to choose the balance between "accuracy" and "cost" when searching a huge number of new materials.
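The exhaustive core of ES-LiR, fitting ordinary least squares to every subset of descriptors and ranking the subsets by fit, can be sketched with small matrices. This toy ranks by training residual rather than the cross-validation error used in the paper, and all helper names are illustrative:

```python
import itertools

def solve(a, b):
    """Solve the small linear system a . x = b by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[col][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_ols(xs, y, subset):
    """Least-squares fit of y on the chosen descriptor subset plus intercept,
    via the normal equations; returns (coefficients, residual sum of squares)."""
    cols = [[1.0] * len(y)] + [[row[j] for row in xs] for j in subset]
    gram = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(c * t for c, t in zip(ci, y)) for ci in cols]
    coef = solve(gram, rhs)
    resid = sum((t - sum(c * col[i] for c, col in zip(coef, cols))) ** 2
                for i, t in enumerate(y))
    return coef, resid

def exhaustive_search(xs, y, n_descriptors):
    """ES-LiR-style sketch: fit every descriptor subset, rank by residual."""
    results = []
    for size in range(1, n_descriptors + 1):
        for subset in itertools.combinations(range(n_descriptors), size):
            _, resid = fit_ols(xs, y, subset)
            results.append((resid, subset))
    return sorted(results)
```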
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
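The bead-placement rule can be sketched in a few lines: locate the current highest-energy bead and bisect an adjacent segment, so resolution doubles around the presumed transition state with each insertion. Which adjacent segment to bisect is an assumption here (the higher-energy side), not necessarily the paper's exact rule:

```python
def refine_string(path, energy):
    """Insert a new bead at the midpoint of the segment adjacent to the
    highest-energy bead, increasing path resolution near the transition state.

    path: list of same-length coordinate tuples; energy: callable on a bead.
    """
    energies = [energy(p) for p in path]
    k = max(range(len(path)), key=lambda i: energies[i])
    # Choose the neighboring segment on the higher-energy side.
    if k == 0:
        j = 0
    elif k == len(path) - 1 or energies[k - 1] >= energies[k + 1]:
        j = k - 1
    else:
        j = k
    mid = tuple((a + b) / 2 for a, b in zip(path[j], path[j + 1]))
    return path[:j + 1] + [mid] + path[j + 1:]
```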
Autonomous Frequency Domain Identification: Theory and Experiment
1989-04-15
This approach is particularly well suited to provide accurate estimation using sampled data. ...criteria for resonance require a unimodal search. Search strategies such as golden-section search, Fibonacci search, etc. are well known and can be found for... identified nonparametrically and a frequency domain description is available, a parametric representation of the transfer function can be found by...
Amra, Sakusic; O'Horo, John C; Singh, Tarun D; Wilson, Gregory A; Kashyap, Rahul; Petersen, Ronald; Roberts, Rosebud O; Fryer, John D; Rabinstein, Alejandro A; Gajic, Ognjen
2017-02-01
Long-term cognitive impairment is a common and important problem in survivors of critical illness. We developed electronic search algorithms to identify cognitive impairment and dementia from the electronic medical records (EMRs) that provide opportunity for big data analysis. Eligible patients met 2 criteria. First, they had a formal cognitive evaluation by The Mayo Clinic Study of Aging. Second, they were hospitalized in intensive care unit at our institution between 2006 and 2014. The "criterion standard" for diagnosis was formal cognitive evaluation supplemented by input from an expert neurologist. Using all available EMR data, we developed and improved our algorithms in the derivation cohort and validated them in the independent validation cohort. Of 993 participants who underwent formal cognitive testing and were hospitalized in intensive care unit, we selected 151 participants at random to form the derivation and validation cohorts. The automated electronic search algorithm for cognitive impairment was 94.3% sensitive and 93.0% specific. The search algorithms for dementia achieved respective sensitivity and specificity of 97% and 99%. EMR search algorithms significantly outperformed International Classification of Diseases codes. Automated EMR data extractions for cognitive impairment and dementia are reliable and accurate and can serve as acceptable and efficient alternatives to time-consuming manual data review. Copyright © 2016 Elsevier Inc. All rights reserved.
Detection of scabies: A systematic review of diagnostic methods
Leung, Victor; Miller, Mark
2011-01-01
BACKGROUND: Accurate diagnosis of scabies infection is important for patient treatment and for public health control of scabies epidemics. OBJECTIVE: To systematically review the accuracy and precision of history, physical examination and tests for diagnosing scabies. METHODS: Using a structured search strategy, Medline and Embase databases were searched for English and French language articles that included a diagnosis of scabies. Studies comparing history, physical examination and/or any diagnostic tests with the reference standard of microscopic visualization of mites, eggs or fecal elements obtained from skin scrapings or biopsies were included for analysis. Data were extracted using standard criteria. RESULTS: History and examination of pruritic dermatoses failed to accurately diagnose scabies infection. Dermatoscopy by a trained practitioner has a positive likelihood ratio of 6.5 (95% CI 4.1 to 10.3) and a negative likelihood ratio of 0.1 (95% CI 0.06 to 0.2) for diagnosing scabies. The accuracy of other diagnostic tests could not be calculated from the data in the literature. CONCLUSIONS: In the face of such diagnostic inaccuracy, clinical judgment is still practical in diagnosing scabies. Two tests are used – the burrow ink test and handheld dermatoscopy. The burrow ink test is a simple, rapid, noninvasive test that can be used to screen a large number of patients. Handheld dermatoscopy is an accurate test, but requires special equipment and trained practitioners. Given the morbidity and costs of scabies infection, and that studies to date lack adequate internal and external validity, research to identify or develop accurate diagnostic tests for scabies infection is needed and justifiable. PMID:23205026
Detection of scabies: A systematic review of diagnostic methods.
Leung, Victor; Miller, Mark
2011-01-01
Accurate diagnosis of scabies infection is important for patient treatment and for public health control of scabies epidemics. To systematically review the accuracy and precision of history, physical examination and tests for diagnosing scabies. Using a structured search strategy, Medline and Embase databases were searched for English and French language articles that included a diagnosis of scabies. Studies comparing history, physical examination and/or any diagnostic tests with the reference standard of microscopic visualization of mites, eggs or fecal elements obtained from skin scrapings or biopsies were included for analysis. Data were extracted using standard criteria. History and examination of pruritic dermatoses failed to accurately diagnose scabies infection. Dermatoscopy by a trained practitioner has a positive likelihood ratio of 6.5 (95% CI 4.1 to 10.3) and a negative likelihood ratio of 0.1 (95% CI 0.06 to 0.2) for diagnosing scabies. The accuracy of other diagnostic tests could not be calculated from the data in the literature. In the face of such diagnostic inaccuracy, clinical judgment is still practical in diagnosing scabies. Two tests are used - the burrow ink test and handheld dermatoscopy. The burrow ink test is a simple, rapid, noninvasive test that can be used to screen a large number of patients. Handheld dermatoscopy is an accurate test, but requires special equipment and trained practitioners. Given the morbidity and costs of scabies infection, and that studies to date lack adequate internal and external validity, research to identify or develop accurate diagnostic tests for scabies infection is needed and justifiable.
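The reported likelihood ratios follow directly from sensitivity and specificity, and their practical use is updating a pretest probability through odds. A sketch; for example, with the LR+ of 6.5 reported for dermatoscopy, a 20% pretest probability of scabies rises to roughly 62%:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pretest_prob, lr):
    """Update a pretest probability with a likelihood ratio via odds."""
    odds = pretest_prob / (1 - pretest_prob) * lr
    return odds / (1 + odds)
```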
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
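The parameter-fitting half of the method can be sketched with a plain Metropolis Monte Carlo search that recovers Lennard-Jones parameters from reference energies. The genetic-programming search over functional forms and the parallel tempering are not reproduced here; all distances, bounds and schedule settings are illustrative assumptions:

```python
import math, random

# Sketch: Metropolis Monte Carlo recovery of Lennard-Jones (epsilon, sigma)
# from the energies of a handful of pair distances. Illustrative only; not
# the paper's genetic-programming / parallel-tempering implementation.

def lj(r, eps, sigma):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

random.seed(0)
distances = [0.95, 1.0, 1.1, 1.3, 1.6, 2.0]
reference = [lj(r, 1.0, 1.0) for r in distances]   # "training" energies

def cost(p):
    return sum((lj(r, p[0], p[1]) - e) ** 2 for r, e in zip(distances, reference))

def clamped_step(p, scale):
    # bounded proposal: eps in [0.1, 3.0], sigma in [0.5, 2.0]
    eps = min(3.0, max(0.1, p[0] + random.gauss(0.0, scale)))
    sigma = min(2.0, max(0.5, p[1] + random.gauss(0.0, scale)))
    return [eps, sigma]

params = [0.5, 1.4]
c = cost(params)
best, best_c = params, c
temperature = 1.0
for _ in range(20000):
    trial = clamped_step(params, 0.02)
    ct = cost(trial)
    if ct < c or random.random() < math.exp((c - ct) / temperature):
        params, c = trial, ct
        if c < best_c:
            best, best_c = params, c
    temperature *= 0.9997        # simple cooling schedule

print([round(x, 2) for x in best])   # should land near the true [1.0, 1.0]
```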
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min
2015-11-03
Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
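The ion-index idea can be sketched as a reverse lookup from binned fragment m/z values to the peptides that produced them, so candidates for a spectrum are found by direct lookup rather than by scoring the whole database. The peptides, masses and bin width below are illustrative assumptions, not pFind's actual data structures:

```python
from collections import defaultdict

BIN = 0.02  # m/z bin width in Thomson (assumed tolerance)

def build_index(peptide_fragments):
    """Map each binned fragment m/z back to the peptides containing it."""
    index = defaultdict(set)
    for pep_id, fragments in peptide_fragments.items():
        for mz in fragments:
            index[int(mz / BIN)].add(pep_id)
    return index

def candidates(index, spectrum_peaks):
    """Rank peptides by how many spectrum peaks hit their fragment bins."""
    votes = defaultdict(int)
    for mz in spectrum_peaks:
        b = int(mz / BIN)
        for nb in (b - 1, b, b + 1):          # tolerate bin-edge effects
            for pep_id in index.get(nb, ()):
                votes[pep_id] += 1
    return sorted(votes, key=votes.get, reverse=True)

frags = {"PEPTIDE": [98.06, 227.10, 324.15], "PROTEIN": [70.07, 197.13, 311.17]}
idx = build_index(frags)
print(candidates(idx, [98.06, 324.16, 500.0]))  # 'PEPTIDE' ranks first
```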
Fred L. Tobiason; Richard W. Hemingway
1994-01-01
A GMMX conformational search routine gives a family of conformations that reflects the Boltzmann-averaged heterocyclic ring conformation as evidenced by accurate prediction of all three coupling constants observed for tetra-O-methyl-(+)-catechin.
Optimized stereo matching in binocular three-dimensional measurement system using structured light.
Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong
2014-09-10
In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
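The logical-comparison idea, replacing a floating-point similarity measure with an XOR-and-popcount Hamming distance over per-pixel binary codes accumulated from the N projected patterns, can be sketched as follows. The codes and the pruned disparity range are illustrative assumptions:

```python
# Sketch: each pixel holds an N-bit code built from N binary patterns;
# candidate correspondences are compared by Hamming distance (XOR plus
# popcount) instead of a floating-point correlation. Codes are invented.

def hamming(a, b):
    return bin(a ^ b).count("1")

def match(left_code, right_codes, lo, hi):
    """Search only the pruned range [lo, hi) from the initial matching
    stage and return the best-matching column index."""
    best_col, best_d = None, None
    for col in range(lo, hi):
        d = hamming(left_code, right_codes[col])
        if best_d is None or d < best_d:
            best_col, best_d = col, d
    return best_col

right_row = [0b101100, 0b101101, 0b011100, 0b111111]
print(match(0b101101, right_row, 0, 4))   # column 1 matches exactly
```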
Ku-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Magnusson, H. G.; Goff, M. F.
1984-01-01
All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method used to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is only an approximate method, much like the finite element method. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is computed with an analytical solution, while the critical slip surface is searched with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method, and the Genetic-Traversal Random Method uses random picking to implement mutation. A computer program that performs the search automatically was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the Slope/W software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
Zhu, Zhikai; Su, Xiaomeng; Go, Eden P; Desaire, Heather
2014-09-16
Glycoproteins are biologically significant large molecules that participate in numerous cellular activities. In order to obtain site-specific protein glycosylation information, intact glycopeptides, with the glycan attached to the peptide sequence, are characterized by tandem mass spectrometry (MS/MS) methods such as collision-induced dissociation (CID) and electron transfer dissociation (ETD). While several emerging automated tools are developed, no consensus is present in the field about the best way to determine the reliability of the tools and/or provide the false discovery rate (FDR). A common approach to calculate FDRs for glycopeptide analysis, adopted from the target-decoy strategy in proteomics, employs a decoy database that is created based on the target protein sequence database. Nonetheless, this approach is not optimal in measuring the confidence of N-linked glycopeptide matches, because the glycopeptide data set is considerably smaller compared to that of peptides, and the requirement of a consensus sequence for N-glycosylation further limits the number of possible decoy glycopeptides tested in a database search. To address the need to accurately determine FDRs for automated glycopeptide assignments, we developed GlycoPep Evaluator (GPE), a tool that helps to measure FDRs in identifying glycopeptides without using a decoy database. GPE generates decoy glycopeptides de novo for every target glycopeptide, in a 1:20 target-to-decoy ratio. The decoys, along with target glycopeptides, are scored against the ETD data, from which FDRs can be calculated accurately based on the number of decoy matches and the ratio of the number of targets to decoys, for small data sets. GPE is freely accessible for download and can work with any search engine that interprets ETD data of N-linked glycopeptides. The software is provided at https://desairegroup.ku.edu/research.
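The 1:20 target-to-decoy FDR arithmetic can be sketched directly: with 20 decoys generated per target, the expected number of false target hits above a score threshold is the decoy hit count divided by 20. The hit counts below are illustrative, not from the paper:

```python
# Sketch of the FDR estimate with an inflated decoy set. With
# decoys_per_target decoys generated for every target glycopeptide,
# (decoy hits / decoys_per_target) estimates the number of false
# positives among the accepted target hits.

def fdr(target_hits, decoy_hits, decoys_per_target=20):
    if target_hits == 0:
        return 0.0
    return (decoy_hits / decoys_per_target) / target_hits

# 40 decoy hits imply ~2 expected false targets among 100 accepted hits
print(fdr(100, 40))  # 0.02
```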
Identifying patients with ischemic heart disease in an electronic medical record.
Ivers, Noah; Pylypenko, Bogdan; Tu, Karen
2011-01-01
Increasing utilization of electronic medical records (EMRs) presents an opportunity to efficiently measure quality indicators in primary care. Achieving this goal requires the development of accurate patient-disease registries. This study aimed to develop and validate an algorithm for identifying patients with ischemic heart disease (IHD) within the EMR. An algorithm was developed to search the unstructured text within the medical history fields in the EMR for IHD-related terminology. This algorithm was applied to a 5% random sample of adult patient charts (n = 969) drawn from a convenience sample of 17 Ontario family physicians. The accuracy of the algorithm for identifying patients with IHD was compared to the results of 3 trained chart abstractors. The manual chart abstraction identified 87 patients with IHD in the random sample (prevalence = 8.98%). The accuracy of the algorithm for identifying patients with IHD was as follows: sensitivity = 72.4% (95% confidence interval [CI]: 61.8-81.5); specificity = 99.3% (95% CI: 98.5-99.8); positive predictive value = 91.3% (95% CI: 82.0-96.7); negative predictive value = 97.3% (95% CI: 96.1-98.3); and kappa = 0.79 (95% CI: 0.72-0.86). Patients with IHD can be accurately identified by applying a search algorithm for the medical history fields in the EMR of primary care providers who were not using standardized approaches to code diagnoses. The accuracy compares favorably to other methods for identifying patients with IHD. The results of this study may aid policy makers, researchers, and clinicians to develop registries and to examine quality indicators for IHD in primary care.
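The reported accuracy figures can be reproduced from a 2x2 table. The cell counts below are our reconstruction, chosen to be consistent with the published sample (n = 969, 87 IHD cases) and the reported values; the abstract does not state them directly:

```python
# Sketch of the validation arithmetic for a diagnostic search algorithm.
# tp/fp/fn/tn are reconstructed, not quoted from the study.

def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

d = diagnostics(tp=63, fp=6, fn=24, tn=876)   # 63 + 24 = 87 cases; total 969
print({k: round(v, 3) for k, v in d.items()})
# {'sensitivity': 0.724, 'specificity': 0.993, 'ppv': 0.913, 'npv': 0.973}
```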
Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel
2018-06-19
In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
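The Nelder-Mead half of the approach can be sketched with a minimal simplex search on a toy fitting problem. The Kirschner-Panetta ODEs and the chaos-synchronization coupling are not reproduced; the logistic model, data and all settings are illustrative:

```python
import math

# Sketch: a compact Nelder-Mead simplex search recovering the parameters
# of a toy logistic model from noiseless observations.

def model(t, r, k):
    z = max(-60.0, min(60.0, r * t))   # clamp to avoid overflow during search
    return k / (1.0 + math.exp(-z))

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
data = [model(t, 0.8, 5.0) for t in ts]        # "observations", true (r, k)

def sse(p):
    r, k = p
    return sum((model(t, r, k) - y) ** 2 for t, y in zip(ts, data))

def nelder_mead(f, x0, step=0.5, iters=500):
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        reflect = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(reflect) < f(best):                       # try expanding further
            expand = [3 * centroid[j] - 2 * worst[j] for j in range(n)]
            simplex[-1] = expand if f(expand) < f(reflect) else reflect
        elif f(reflect) < f(worst):
            simplex[-1] = reflect
        else:                                          # shrink toward the best
            simplex = [best] + [
                [(p[j] + best[j]) / 2 for j in range(n)] for p in simplex[1:]
            ]
    return min(simplex, key=f)

r_hat, k_hat = nelder_mead(sse, [0.1, 1.0])
print(round(r_hat, 2), round(k_hat, 2))        # close to the true (0.8, 5.0)
```

This simplified variant keeps only reflection, expansion and shrink steps; production implementations add contraction moves and convergence tests.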
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
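One widely used focal quality metric, the variance of the image Laplacian, illustrates the kind of fitness function proposed: it is high for sharp reconstructions and low for blurred ones. The tiny images below are illustrative, and this particular metric is a stand-in, not necessarily the one the authors selected:

```python
# Sketch: variance of the discrete Laplacian as a focus measure. A
# parameter search over candidate average dielectric properties would
# keep the estimate whose reconstructed image maximizes such a metric.

def variance_of_laplacian(img):
    rows, cols = len(img), len(img[0])
    lap = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            lap.append(img[i-1][j] + img[i+1][j] + img[i][j-1]
                       + img[i][j+1] - 4 * img[i][j])
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

sharp = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
blurred = [[1, 1, 1, 0], [1, 2, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0]]
print(variance_of_laplacian(sharp) > variance_of_laplacian(blurred))  # True
```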
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) How well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
Data-driven model-independent searches for long-lived particles at the LHC
NASA Astrophysics Data System (ADS)
Coccaro, Andrea; Curtin, David; Lubatti, H. J.; Russell, Heather; Shelton, Jessie
2016-12-01
Neutral long-lived particles (LLPs) are highly motivated by many beyond the Standard Model scenarios, such as theories of supersymmetry, baryogenesis, and neutral naturalness, and present both tremendous discovery opportunities and experimental challenges for the LHC. A major bottleneck for current LLP searches is the prediction of Standard Model backgrounds, which are often impossible to simulate accurately. In this paper, we propose a general strategy for obtaining differential, data-driven background estimates in LLP searches, thereby notably extending the range of LLP masses and lifetimes that can be discovered at the LHC. We focus on LLPs decaying in the ATLAS muon system, where triggers providing both signal and control samples are available at LHC run 2. While many existing searches require two displaced decays, a detailed knowledge of backgrounds will allow for very inclusive searches that require just one detected LLP decay. As we demonstrate for the h → XX signal model of LLP pair production in exotic Higgs decays, this results in dramatic sensitivity improvements for proper lifetimes ≳ 10 m. In theories of neutral naturalness, this extends reach to glueball masses far below the bb̄ threshold. Our strategy readily generalizes to other signal models and other detector subsystems. This framework therefore lends itself to the development of a systematic, model-independent LLP search program, in analogy to the highly successful simplified-model framework of prompt searches.
A Text Searching Tool to Identify Patients with Idiosyncratic Drug-Induced Liver Injury.
Heidemann, Lauren; Law, James; Fontana, Robert J
2017-03-01
Idiosyncratic drug-induced liver injury (DILI) is an uncommon but important cause of liver disease that is challenging to diagnose and identify in the electronic medical record (EMR). To develop an accurate, reliable, and efficient method of identifying patients with bona fide DILI in an EMR system. In total, 527,000 outpatient and ER encounters in an EPIC-based EMR were searched for potential DILI cases attributed to eight drugs. A search algorithm extracted 200 characters of text around 14 liver injury terms in the EMR, and the resulting text snippets were collated. Physician investigators reviewed the data outputs and used standardized causality assessment methods to adjudicate the potential DILI cases. A total of 101 DILI cases were identified from the 2564 potential DILI cases, comprising 62 probable DILI cases, 25 possible DILI cases, nine historical DILI cases, and five allergy-only cases. Elimination of the term "liver disease" from the search strategy improved the search recall from 4 to 19 %, while inclusion of the four highest-yield liver injury terms further improved the positive predictive value to 64 % but reduced the overall case detection rate by 47 %. RUCAM scores of the 57 probable DILI cases were generally high and concordant with expert opinion causality assessment scores. A novel text searching tool was developed that identified a large number of DILI cases from a widely used EMR system. A computerized extraction of dictated text followed by manual review of text snippets can rapidly identify bona fide cases of idiosyncratic DILI.
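The extraction step can be sketched with a regular-expression search that keeps a fixed window of text around each hit for manual review. The terms and the clinical note below are invented; the study used 14 liver injury terms and 200-character windows:

```python
import re

# Sketch: pull a fixed character window around each liver-injury term so a
# reviewer adjudicates short snippets instead of whole notes. Illustrative
# term list and note text; not the study's actual 14-term dictionary.

TERMS = ["hepatotoxicity", "liver injury", "jaundice", "hepatitis"]
WINDOW = 200   # total characters of context kept around each hit

def snippets(note, terms=TERMS, window=WINDOW):
    out = []
    for term in terms:
        for m in re.finditer(re.escape(term), note, flags=re.IGNORECASE):
            start = max(0, m.start() - window // 2)
            out.append(note[start:m.end() + window // 2])
    return out

note = ("Patient started on isoniazid in March. Two months later she "
        "developed jaundice and fatigue; labs suggested drug-induced "
        "liver injury, and the medication was stopped.")
found = snippets(note)
print(len(found))  # one hit each for 'liver injury' and 'jaundice'
```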
Protein structural similarity search by Ramachandran codes
Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang
2007-01-01
Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structural similarity searching. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to the structural similarity search. Its accuracy is similar to that of Combinatorial Extension (CE), and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
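The linear-encoding step can be sketched as nearest-centre classification on the Ramachandran map: each residue's (phi, psi) pair is assigned the letter of the closest cluster centre, turning a 3D structure into a string that ordinary sequence-alignment tools can compare. The three cluster centres below are illustrative stand-ins, not SARST's actual codebook:

```python
import math

# Sketch: encode backbone (phi, psi) angles as letters by nearest cluster
# centre on the Ramachandran map. Centres are illustrative assumptions.

CENTRES = {                 # letter -> (phi, psi) in degrees
    "H": (-60.0, -45.0),    # alpha-helical region
    "E": (-120.0, 130.0),   # beta-sheet region
    "L": (60.0, 45.0),      # left-handed region
}

def angle_diff(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)   # dihedral angles wrap around

def encode(phi_psi_list):
    code = []
    for phi, psi in phi_psi_list:
        letter = min(CENTRES, key=lambda c: math.hypot(
            angle_diff(phi, CENTRES[c][0]), angle_diff(psi, CENTRES[c][1])))
        code.append(letter)
    return "".join(code)

backbone = [(-57.0, -47.0), (-63.0, -41.0), (-119.0, 125.0), (58.0, 40.0)]
print(encode(backbone))  # HHEL
```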
Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A
2009-12-16
Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.
The Talent Search Model: Past, Present, and Future
ERIC Educational Resources Information Center
Swiatek, Mary Ann
2007-01-01
Typical standardized achievement tests cannot provide accurate information about gifted students' abilities because they are not challenging enough for such students. Talent searches solve this problem through above-level testing--using tests designed for older students to raise the ceiling for younger, gifted students. Currently, talent search…
Developing a dengue forecast model using machine learning: A case study in China
Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-01-01
Background In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Methodology/Principal findings Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011–2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. Conclusion and significance The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. 
The findings can help the government and community respond early to dengue epidemics. PMID:29036169
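The assessment step, RMSE and R-squared on held-out weekly case counts, can be sketched directly. The observed and predicted series below are invented, not the Guangdong surveillance data:

```python
import math

# Sketch of the model-assessment arithmetic used to compare forecasting
# models. Series are illustrative stand-ins for weekly dengue counts.

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

observed = [12, 18, 35, 80, 160, 140, 90, 40]
svr_like = [10, 20, 30, 85, 150, 145, 95, 38]   # hypothetical SVR output
print(round(rmse(observed, svr_like), 2), round(r_squared(observed, svr_like), 3))
# 5.15 0.99
```

A model selected by cross-validation would simply be the one minimizing RMSE (or maximizing R-squared) across held-out folds.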
NASA Astrophysics Data System (ADS)
Yin, Lucy; Andrews, Jennifer; Heaton, Thomas
2018-05-01
Earthquake parameter estimation using nearest-neighbor searching in a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest-neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
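The KD-tree organization can be sketched in a few lines: build the tree over the feature database once, then answer nearest-neighbor queries without scanning every record. This is a generic pure-Python 2-D illustration, not the Gutenberg Algorithm's filter-bank feature set:

```python
# Sketch: minimal k-d tree build and nearest-neighbour query, checked
# against brute force. Database points are illustrative.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"pt": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, q, best=None):
    if node is None:
        return best
    if best is None or dist2(q, node["pt"]) < dist2(q, best):
        best = node["pt"]
    axis = node["axis"]
    diff = q[axis] - node["pt"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, q, best)
    if diff ** 2 < dist2(q, best):   # other side may still hold a closer point
        best = nearest(far, q, best)
    return best

db = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)]
tree = build(db)
query = (9.0, 2.0)
print(nearest(tree, query))                     # (8.0, 1.0)
print(min(db, key=lambda p: dist2(p, query)))   # brute force agrees
```

The pruning test on `diff ** 2` is what delivers the speed-up: whole subtrees are skipped whenever the splitting plane is farther away than the best match found so far.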
Nse, Odunaiya; Quinette, Louw; Okechukwu, Ogah
2015-09-01
Well-developed and validated lifestyle cardiovascular disease (CVD) risk factor questionnaires are the key to obtaining accurate information for planning CVD prevention programs, a necessity in developing countries. We conducted this review to assess the methods and processes used for the development and content validation of lifestyle CVD risk factor questionnaires, and possibly to develop an evidence-based guideline for their development and content validation. Relevant databases at the Stellenbosch University library were searched for studies conducted between 2008 and 2012, in the English language and among humans, using the following databases: PubMed, CINAHL, PsycINFO and ProQuest. The search terms used were CVD risk factors, questionnaires, smoking, alcohol, physical activity and diet. The methods identified for the development of lifestyle CVD risk factor questionnaires were: review of the literature, either systematic or traditional; involvement of experts and/or the target population using focus group discussions/interviews; the clinical experience of the authors; and the deductive reasoning of the authors. For validation, the methods used were: the involvement of an expert panel, the use of the target population, and factor analysis. Combining methods produces questionnaires with good content validity and other psychometric properties that we consider good.
Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.
Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie
2018-06-12
Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
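The enhancement described above (basic PSO plus an isotropic Gaussian mutation operator) can be sketched in Python. This is a toy version run on the standard Rastrigin multimodal benchmark, not the authors' C++ implementation; the mutation probability and standard deviation are illustrative choices:

```python
import math, random

random.seed(1)

def rastrigin(x):
    """Standard multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def pso(f, dim=2, n=30, iters=200, lo=-5.12, hi=5.12,
        w=0.7, c1=1.5, c2=1.5, p_mut=0.1, sigma=0.1):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # Isotropic Gaussian mutation: every coordinate perturbed with the
            # same standard deviation, helping particles escape local minima.
            if random.random() < p_mut:
                pos[i] = [x + random.gauss(0.0, sigma) for x in pos[i]]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(rastrigin)
print(rastrigin(best))
```

Because the mutation is isotropic (direction-independent), the enhancement preserves the rotation invariance of the underlying PSO variant.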
Methods for Documenting Systematic Review Searches: A Discussion of Common Issues
ERIC Educational Resources Information Center
Rader, Tamara; Mann, Mala; Stansfield, Claire; Cooper, Chris; Sampson, Margaret
2014-01-01
Introduction: As standardized reporting requirements for systematic reviews are being adopted more widely, review authors are under greater pressure to accurately record their search process. With careful planning, documentation to fulfill the Preferred Reporting Items for Systematic Reviews and Meta-Analyses requirements can become a valuable…
Accounting for unsearched areas in estimating wind turbine-caused fatality
Huso, Manuela M.P.; Dalthorp, Dan
2014-01-01
With wind energy production expanding rapidly, concerns about turbine-induced bird and bat fatality have grown and the demand for accurate estimation of fatality is increasing. Estimation typically involves counting carcasses observed below turbines and adjusting counts by estimated detection probabilities. Three primary sources of imperfect detection are 1) carcasses fall into unsearched areas, 2) carcasses are removed or destroyed before sampling, and 3) carcasses present in the searched area are missed by observers. Search plots large enough to comprise 100% of turbine-induced fatality are expensive to search and may nonetheless contain areas unsearchable because of dangerous terrain or impenetrable brush. We evaluated models relating carcass density to distance from the turbine to estimate the proportion of carcasses expected to fall in searched areas and evaluated the statistical cost of restricting searches to areas near turbines where carcass density is highest and search conditions optimal. We compared 5 estimators differing in assumptions about the relationship of carcass density to distance from the turbine. We tested them on 6 different carcass dispersion scenarios at each of 3 sites under 2 different search regimes. We found that even simple distance-based carcass-density models were more effective at reducing bias than was a 5-fold expansion of the search area. Estimators incorporating fitted rather than assumed models were least biased, even under restricted searches. Accurate estimates of fatality at wind-power facilities will allow critical comparisons of rates among turbines, sites, and regions and contribute to our understanding of the potential environmental impact of this technology.
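The area-adjustment idea can be sketched numerically. The snippet below assumes, purely for illustration, that carcass fall distances follow a Rayleigh distribution; the study's fitted distance models and site-specific parameters are not reproduced here:

```python
import math

def fraction_within(radius, sigma):
    """Expected fraction of carcasses falling within `radius` of a turbine,
    assuming fall distances follow a Rayleigh(sigma) distribution."""
    # Rayleigh CDF: 1 - exp(-r^2 / (2 sigma^2))
    return 1.0 - math.exp(-radius * radius / (2.0 * sigma * sigma))

def adjusted_fatality(carcass_count, radius, sigma, detection_prob):
    """Scale the raw count by the probability that a carcass both lands
    in the searched area and is detected by an observer."""
    p_area = fraction_within(radius, sigma)
    return carcass_count / (p_area * detection_prob)

# Illustrative numbers: 12 carcasses found, 50 m search radius,
# sigma = 30 m, 80% observer detection probability.
est = adjusted_fatality(12, 50.0, 30.0, 0.8)
print(round(est, 1))
```

A fitted rather than assumed distance model would replace the Rayleigh CDF here, which is exactly the distinction the study found to matter for bias.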
Liu, Qiaoxia; Zhou, Binbin; Wang, Xinliang; Ke, Yanxiong; Jin, Yu; Yin, Lihui; Liang, Xinmiao
2012-12-01
A search library of benzylisoquinoline alkaloids was established based on the preparation of alkaloid fractions from Rhizoma coptidis, Cortex phellodendri, and Rhizoma corydalis. In this work, two alkaloid fractions from each herbal medicine were first prepared based on selective separation on the "click" binaphthyl column. These alkaloid fractions were then analyzed on a C18 column by liquid chromatography coupled with tandem mass spectrometry. Many structure-related compounds were included in these alkaloid fractions, which led to easy separation and good MS response in further work. On this basis, a search library of 52 benzylisoquinoline alkaloids was established, comprising eight aporphine, 19 tetrahydroprotoberberine, two protopine, two benzyltetrahydroisoquinoline, and 21 protoberberine alkaloids. The search library records contained compound names, structures, retention times, accurate masses, fragmentation pathways of benzylisoquinoline alkaloids, and their sources among the three herbal medicines. Using such a library, the alkaloids, especially trace and unknown components in an herbal medicine, could be accurately and quickly identified. In addition, the distribution of benzylisoquinoline alkaloids in the herbal medicines could also be summarized by searching the source samples in the library. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Kumar, S.; Singh, A.; Dhar, A.
2017-08-01
The accurate estimation of the photovoltaic parameters is fundamental to gain an insight of the physical processes occurring inside a photovoltaic device and thereby to optimize its design, fabrication processes, and quality. A simulative approach of accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during the device fabrication and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical compositional behaviour of PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
Sharma, Vivekanand; Holmes, John H; Sarkar, Indra N
2016-08-05
Identify and highlight research issues and methods used in studying Complementary and Alternative Medicine (CAM) information needs, access, and exchange over the Internet. A literature search was conducted using Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines from PubMed to identify articles that have studied Internet use in the CAM context. Additional searches were conducted at Nature.com and Google Scholar. The Internet provides a major medium for attaining CAM information and can also serve as an avenue for conducting CAM related surveys. Based on the literature analyzed in this review, there seems to be significant interest in developing methodologies for identifying CAM treatments, including the analysis of search query data and social media platform discussions. Several studies have also underscored the challenges in developing approaches for identifying the reliability of CAM-related information on the Internet, which may not be supported with reliable sources. The overall findings of this review suggest that there are opportunities for developing approaches for making available accurate information and developing ways to restrict the spread and sale of potentially harmful CAM products and information. Advances in Internet research are yet to be used in the context of understanding CAM prevalence and perspectives. Such approaches may provide valuable insights into current trends and needs in the context of CAM use and spread.
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.
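The core DBS loop (toggle a pixel, keep the change only if the filtered squared error drops) can be sketched on a 1-D grayscale strip. The 3-tap blur below is a crude stand-in for the luminance/chrominance visual model and the printer dot-interaction model used in the paper:

```python
def blur(img):
    """3-tap moving average as a crude stand-in for an HVS filter."""
    n = len(img)
    return [(img[max(i - 1, 0)] + img[i] + img[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def error(cont, half):
    """Perceived squared error: compare the filtered images, not the raw ones."""
    return sum((a - b) ** 2 for a, b in zip(blur(cont), blur(half)))

def direct_binary_search(cont, iters=10):
    """Start from a thresholded halftone, then greedily toggle pixels
    whenever doing so lowers the perceived (filtered) squared error."""
    half = [1.0 if v >= 0.5 else 0.0 for v in cont]
    for _ in range(iters):
        improved = False
        for i in range(len(half)):
            trial = half[:]
            trial[i] = 1.0 - trial[i]
            if error(cont, trial) < error(cont, half):
                half, improved = trial, True
        if not improved:
            break
    return half

cont = [0.2, 0.4, 0.6, 0.8, 0.5, 0.3, 0.7, 0.9]
ht = direct_binary_search(cont)
print(ht)
```

The full method additionally considers pixel swaps, operates per colorant plane in a luminance/chrominance space, and uses a measured dot-overlap model; none of that is reproduced in this sketch.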
Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context
2016-12-01
Case Study: Our approach to coordination was initially motivated and developed in RoboCup soccer games. In fact, it has been first deployed on a team of… features a rather accurate model of the behavior and capabilities of the humanoid robot in the field. In the soccer case study, our goal is to… on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment.
Search Strategy to Identify Dental Survival Analysis Articles Indexed in MEDLINE.
Layton, Danielle M; Clarke, Michael
2016-01-01
Articles reporting survival outcomes (time-to-event outcomes) in patients over time are challenging to identify in the literature. Research shows the words authors use to describe their dental survival analyses vary, and that allocation of medical subject headings by MEDLINE indexers is inconsistent. Together, this undermines accurate article identification. The present study aims to develop and validate a search strategy to identify dental survival analyses indexed in MEDLINE (Ovid). A gold standard cohort of articles was identified to derive the search terms, and an independent gold standard cohort of articles was identified to test and validate the proposed search strategies. The first cohort included all 6,955 articles published in the 50 dental journals with the highest impact factors in 2008, of which 95 articles were dental survival articles. The second cohort included all 6,514 articles published in the 50 dental journals with the highest impact factors for 2012, of which 148 were dental survival articles. Each cohort was identified by a systematic hand search. Performance parameters of sensitivity, precision, and number needed to read (NNR) for the search strategies were calculated. Sensitive, precise, and optimized search strategies were developed and validated. The performances of the search strategy maximizing sensitivity were 92% sensitivity, 14% precision, and 7.11 NNR; the performances of the strategy maximizing precision were 93% precision, 10% sensitivity, and 1.07 NNR; and the performances of the strategy optimizing the balance between sensitivity and precision were 83% sensitivity, 24% precision, and 4.13 NNR. The methods used to identify search terms were objective, not subjective. The search strategies were validated in an independent group of articles that included different journals and different publication years. 
Across the three search strategies, dental survival articles can be identified with sensitivity up to 92%, precision up to 93%, and NNR of less than two articles to identify relevant records. This research has highlighted the impact that variation in reporting and indexing has on article identification and has improved researchers' ability to identify dental survival articles.
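The reported performance parameters follow directly from retrieval counts. Here is a small sketch; the counts are hypothetical, chosen only to reproduce figures close to the sensitive strategy's reported 92% sensitivity, 14% precision, and 7.11 NNR:

```python
def search_performance(tp, fp, fn):
    """Sensitivity (recall), precision, and number needed to read (NNR)
    for a bibliographic search strategy.
    tp: relevant articles retrieved; fp: irrelevant articles retrieved;
    fn: relevant articles the strategy missed."""
    sensitivity = tp / (tp + fn)   # fraction of relevant articles retrieved
    precision = tp / (tp + fp)     # fraction of retrieved articles relevant
    nnr = 1.0 / precision          # records read per relevant record found
    return sensitivity, precision, nnr

# Hypothetical counts: 87 relevant retrieved, 532 irrelevant retrieved, 8 missed.
sens, prec, nnr = search_performance(87, 532, 8)
print(round(sens, 2), round(prec, 2), round(nnr, 2))
```

NNR is simply the reciprocal of precision, which is why the precision-maximizing strategy (93% precision) achieves an NNR of about 1.07.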
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo
2016-06-13
One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.
Supernovae Discovery Efficiency
NASA Astrophysics Data System (ADS)
John, Colin
2018-01-01
We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and an important correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved with real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae, built with realism in mind, yield an understanding of efficiency as a function of position relative to other celestial objects and of brightness. To improve realism, we built a more accurate model of supernovae using a point-spread function. A further improvement is planting these objects close to galaxies and across a range of brightness, magnitude, local galactic brightness, and redshift. Once planted, a realistic SN is visible and discoverable by the searcher. It is important to identify the factors that affect this discovery efficiency, as exploring the factors that affect detection yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques, and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. Once the efficiency is measured and refined across many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models that determine how long star systems take from inception to explosion (the delay time distribution). This delay time distribution is compared with SN progenitor models to get an accurate idea of what these stars were like before their deaths.
Googling endometriosis: a systematic review of information available on the Internet.
Hirsch, Martin; Aggarwal, Shivani; Barker, Claire; Davis, Colin J; Duffy, James M N
2017-05-01
The demand for health information online is increasing rapidly without clear governance. We aim to evaluate the credibility, quality, readability, and accuracy of online patient information concerning endometriosis. We searched 5 popular Internet search engines: aol.com, ask.com, bing.com, google.com, and yahoo.com. We developed a search strategy, in consultation with patients with endometriosis, to identify relevant World Wide Web pages. Pages containing information related to endometriosis for women with endometriosis or the public were eligible. Two independent authors screened the search results. World Wide Web pages were evaluated using validated instruments across 3 of the 4 following domains: (1) credibility (White Paper instrument; range 0-10); (2) quality (DISCERN instrument; range 0-85); (3) readability (Flesch-Kincaid instrument; range 0-100); and (4) accuracy (assessed against prioritized criteria developed in consultation with health care professionals, researchers, and women with endometriosis, based on the European Society of Human Reproduction and Embryology guidelines [range 0-30]). We summarized these data in diagrams, tables, and narratively. We identified 750 World Wide Web pages, of which 54 were included. Over a third of Web pages did not attribute authorship, and almost half of the included pages did not report sources of information or academic references. No World Wide Web page provided information assessed as being written in plain English. A minority of web pages were assessed as high quality. A single World Wide Web page provided accurate information: evidentlycochrane.net. Available information was, in general, skewed toward the diagnosis of endometriosis. There were 16 credible World Wide Web pages; however, their content limitations were infrequently discussed. No World Wide Web page scored highly across all 4 domains. 
In the unlikely event that a World Wide Web page reports high-quality, accurate, and credible health information it is typically challenging for a lay audience to comprehend. Health care professionals, and the wider community, should inform women with endometriosis of the risk of outdated, inaccurate, or even dangerous information online. The implementation of an information standard will incentivize providers of online information to establish and adhere to codes of conduct. Copyright © 2016 Elsevier Inc. All rights reserved.
Medical student attitudes towards older people: a critical review of quantitative measures.
Wilson, Mark A G; Kurrle, Susan; Wilson, Ian
2018-01-24
Further research into medical student attitudes towards older people is important, and requires accurate and detailed evaluative methodology. The two objectives for this paper are: (1) From the literature, to critically review instruments of measure for medical student attitudes towards older people, and (2) To recommend the most appropriate quantitative instrument for future research into medical student attitudes towards older people. A SCOPUS and Ovid cross search was performed using the keywords Attitude and medical student and aged or older or elderly. This search was supplemented by manual searching, guided by citations in articles identified by the initial literature search, using the SCOPUS and PubMed databases. International studies quantifying medical student attitudes have demonstrated neutral to positive attitudes towards older people, using various instruments. The most commonly used instruments are the Ageing Semantic Differential (ASD) and the University of California Los Angeles Geriatric Attitudes Scale, with several other measures occasionally used. All instruments used to date have inherent weaknesses. A reliable and valid instrument with which to quantify modern medical student attitudes towards older people has not yet been developed. Adaptation of the ASD for contemporary usage is recommended.
Protein structure database search and evolutionary classification.
Yang, Jinn-Moon; Tung, Chi-Hua
2006-01-01
As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].
Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K
2014-09-04
In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
Chen, Xi; Chen, Huajun; Bi, Xuan; Gu, Peiqin; Chen, Jiaoyan; Wu, Zhaohui
2014-01-01
Understanding the functional mechanisms of the complex biological system as a whole is drawing more and more attention in global health care management. Traditional Chinese Medicine (TCM), essentially different from Western Medicine (WM), is gaining increasing attention due to its emphasis on individual wellness and natural herbal medicine, which satisfies the goal of integrative medicine. However, with the explosive growth of biomedical data on the Web, biomedical researchers are now confronted with the problem of large-scale data analysis and data query. In addition, biomedical data has wide coverage, usually coming from multiple heterogeneous data sources with different taxonomies, making it hard to integrate and query the big biomedical data. Embedded with domain knowledge from different disciplines, all regarding human biological systems, the heterogeneous data repositories are implicitly connected by human expert knowledge. Traditional search engines cannot provide accurate and comprehensive search results for the semantically associated knowledge since they only support keyword-based searches. In this paper, we present BioTCM-SE, a semantic search engine for the information retrieval of modern biology and TCM, which provides biologists with a comprehensive and accurate associated knowledge query platform to greatly facilitate implicit knowledge discovery between WM and TCM.
NASA Astrophysics Data System (ADS)
Gordon, M. K.; Showalter, M. R.; Ballard, L.; Tiscareno, M.; French, R. S.; Olson, D.
2017-06-01
The PDS RMS Node hosts OPUS - an accurate, comprehensive search tool for spacecraft remote sensing observations. OPUS supports Cassini: CIRS, ISS, UVIS, VIMS; New Horizons: LORRI, MVIC; Galileo SSI; Voyager ISS; and Hubble: ACS, STIS, WFC3, WFPC2.
A data base of geologic field spectra
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Goetz, A. F. H.; Paley, H. N.; Alley, R. E.; Abbott, E. A.
1981-01-01
It is noted that field samples measured in the laboratory do not always present an accurate picture of the ground surface sensed by airborne or spaceborne instruments because of the heterogeneous nature of most surfaces and because samples are disturbed and surface characteristics changed by collection and handling. The development of new remote sensing instruments relies on the analysis of surface materials in their natural state. The existence of thousands of Portable Field Reflectance Spectrometer (PFRS) spectra has necessitated a single, all-inclusive data base that permits greatly simplified searching and sorting procedures and facilitates further statistical analyses. The data base developed at JPL for cataloging geologic field spectra is discussed.
Forensic imaging tools for law enforcement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smithpeter, Colin L.; Sandison, David R.; Vargo, Timothy D.
2000-01-01
Conventional methods of gathering forensic evidence at crime scenes are encumbered by difficulties that limit local law enforcement efforts to apprehend offenders and bring them to justice. Working with a local law-enforcement agency, Sandia National Laboratories has developed a prototype multispectral imaging system that can speed up the investigative search task and provide additional and more accurate evidence. The system, called the Criminalistics Light-imaging Unit (CLU), has demonstrated the capabilities of locating fluorescing evidence at crime scenes under normal lighting conditions and of imaging other types of evidence, such as untreated fingerprints, by direct white-light reflectance. CLU employs state-of-the-art technology that provides for viewing and recording of the entire search process on videotape. This report describes the work performed by Sandia to design, build, evaluate, and commercialize CLU.
The NASA Astrophysics Data System: Capabilities and Roadmap for the 2020s
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; ADS Team
2018-06-01
The NASA Astrophysics Data System (ADS) is used daily by researchers and curators as a discovery platform for the Astronomy literature. Over the past several years, the ADS has been adding to the breadth and depth of its contents. Scholarly astronomy articles are now indexed as full-text documents, allowing for complete and accurate literature searches. High-level data products, data links, and software used in refereed astronomy papers are now also being ingested and indexed in our database. All the search functionality exposed in the new ADS interface is also available via its API, which we are continuing to develop and enhance. In this talk I will describe the current system, our current roadmap, and solicit input from the community regarding what additional data, services, and discovery capabilities the ADS should support.
Efficient automatic OCR word validation using word partial format derivation and language model
NASA Astrophysics Data System (ADS)
Chen, Siyuan; Misra, Dharitri; Thoma, George R.
2010-01-01
In this paper we present an OCR validation module, implemented for the System for Preservation of Electronic Resources (SPER) developed at the U.S. National Library of Medicine. The module detects and corrects suspicious words in the OCR output of scanned textual documents through a procedure of deriving partial formats for each suspicious word, retrieving candidate words by partial-match search from lexicons, and comparing the joint probabilities of N-gram and OCR edit transformation corresponding to the candidates. The partial format derivation, based on OCR error analysis, efficiently and accurately generates candidate words from lexicons represented by ternary search trees. In our test case comprising a historic medico-legal document collection, this OCR validation module yielded the correct words with 87% accuracy and reduced the overall OCR word errors by around 60%.
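The partial-match lookup over a ternary search tree can be sketched as follows, with '.' marking an OCR-suspect character position. The lexicon words are illustrative, and the N-gram/edit-probability ranking of candidates is omitted:

```python
class TSTNode:
    """One node of a ternary search tree: a character plus lo/eq/hi links."""
    __slots__ = ("ch", "lo", "eq", "hi", "end")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.end = ch, None, None, None, False

def insert(node, word, i=0):
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = insert(node.lo, word, i)
    elif ch > node.ch:
        node.hi = insert(node.hi, word, i)
    elif i < len(word) - 1:
        node.eq = insert(node.eq, word, i + 1)
    else:
        node.end = True
    return node

def partial_match(node, pattern, i=0, prefix="", out=None):
    """Collect lexicon words matching `pattern`, where '.' stands for an
    OCR-suspect character position (any single character)."""
    if out is None:
        out = []
    if node is None:
        return out
    ch = pattern[i]
    if ch == "." or ch < node.ch:
        partial_match(node.lo, pattern, i, prefix, out)
    if ch == "." or ch == node.ch:
        if i == len(pattern) - 1:
            if node.end:
                out.append(prefix + node.ch)
        else:
            partial_match(node.eq, pattern, i + 1, prefix + node.ch, out)
    if ch == "." or ch > node.ch:
        partial_match(node.hi, pattern, i, prefix, out)
    return out

root = None
for w in ["medical", "medico", "metical", "radical"]:
    root = insert(root, w)
print(sorted(partial_match(root, "me.ical")))  # → ['medical', 'metical']
```

In the full module, the candidates returned here would then be ranked by the joint N-gram and OCR-edit probabilities to pick the correction.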
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold selection formulae were computed recursively, eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, markedly accelerating the search. Extensive experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality, and proves to be a fast and effective method for image segmentation.
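The tent-map sequence that replaces the uniform random factor in the bee colony's local search can be sketched as below; the neighborhood update follows the standard artificial-bee-colony formula, and the constants are illustrative rather than taken from the paper:

```python
def tent_map_sequence(x0, n):
    """Chaotic sequence on (0, 1) from the tent map:
    x_{k+1} = 2*x_k if x_k < 0.5 else 2*(1 - x_k).
    x0 should avoid 0.0, 0.5 and 1.0, which are fixed/absorbing points."""
    seq, x = [], x0
    for _ in range(n):
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        seq.append(x)
    return seq


def chaotic_neighbor(xi, xk, chaos):
    """Standard ABC local-search step v = xi + phi * (xi - xk), with the
    uniform factor phi in [-1, 1] replaced by a tent-map value mapped
    onto the same range. xi is the current solution component, xk the
    corresponding component of a randomly chosen neighbor."""
    phi = 2.0 * chaos - 1.0
    return xi + phi * (xi - xk)
```

Because the tent map is ergodic on (0, 1), the perturbations cover the neighborhood more evenly than short runs of a pseudo-random generator, which is the usual motivation for chaotic local search.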
Quality of Online Resources for Pancreatic Cancer Patients.
De Groot, Lauren; Harris, Ilene; Regehr, Glenn; Tekian, Ara; Ingledew, Paris-Ann
2017-10-18
The Internet is increasingly a source of information for pancreatic cancer patients. This disease is usually diagnosed at an advanced stage; therefore, timely access to high-quality information is critical. Our purpose was to systematically evaluate the information available to pancreatic cancer patients on the Internet. An Internet search using the term "pancreatic cancer" was performed with the meta-search engines "Dogpile", "Yippy" and "Google". The top 100 websites returned by the search engines were evaluated using a validated structured rating tool. Inter-rater reliability was evaluated using kappa statistics and results were analyzed using descriptive statistics. Among the 100 websites evaluated, etiology/risk factors and symptoms were the most accurately covered topics (70% and 67% of websites, respectively). Prevention, treatment and prognosis were the least accurate sections (55%, 55% and 43% of websites). Prevention and prognosis were also the least likely to be covered, addressed by only 63 and 51 websites, respectively. Only 40% of websites identified an author. Twenty-two percent of websites were at a university reading level. The majority of online information is accurate but incomplete, and websites may lack information on prognosis. Many websites were outdated, lacked author information, and had inappropriate readability levels. This knowledge can inform the dialogue between healthcare providers and patients.
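The kappa statistic used above for inter-rater reliability is standard; a minimal sketch of Cohen's kappa for two raters over categorical scores (generic, not the authors' code):

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    rate and p_e the agreement expected by chance from the marginal
    category frequencies of each rater."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_e == 1.0:
        # both raters used a single identical category throughout
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)
```

Kappa corrects raw percent agreement for chance: two raters who each mark half the websites "accurate" at random would agree half the time, and kappa scores that as 0 rather than 50%.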
From technical jargon to plain English for application.
Lindsley, O R
1991-01-01
These examples of translating technical jargon into plain English application words, acronyms, letter codes, and simple tests were necessary as we developed Precision Teaching. I hope our experience is useful to others facing the problems of applying technology in practical settings. At the least, our experience should give you an idea of the work and time involved in making your own translations. Above all, be patient. Accurate plain English translations do not come easily. They cannot be made at your desk. A search often takes years to produce one new accurate plain English translation. Rapid publication pressures, journal editorial policies, and investments in materials, books, and computer programs all combine to hamper these translations. It's possible that you will find some of our plain English equivalents useful in your own applied behavior analysis applications. PMID:1752836
Accuracy of Information about the Intrauterine Device on the Internet
Madden, Tessa; Cortez, Sarah; Kuzemchak, Marie; Kaphingst, Kimberly A.; Politi, Mary C.
2015-01-01
Background Intrauterine devices (IUDs) are highly effective methods of contraception, but use continues to lag behind less effective methods such as oral contraceptive pills and condoms. Women who are aware of the actual effectiveness of various contraceptive methods are more likely to choose the IUD. Conversely, women who are misinformed about the safety of IUDs may be less likely to use this method. Individuals increasingly use the Internet for health information. Information about IUDs obtained through the Internet may influence attitudes about IUD use among patients. Objective Our objective was to evaluate the quality of information about intrauterine devices (IUDs) among websites providing contraceptive information to the public. Study Design We developed a 56-item structured questionnaire to evaluate the quality of information about IUDs available through the Internet. We then conducted an online search to identify websites containing information about contraception and IUDs using common search engines. The search was performed in August 2013 and websites were reviewed again in October 2015 to ensure there were no substantial changes. Results Our search identified over 2000 websites, of which 108 were eligible for review; 105 (97.2%) of these sites contained information about IUDs. Eighty-six percent of sites provided at least one mechanism of the IUD. Most websites accurately reported advantages of the IUD including that it is long-acting (91%), highly effective (82%), and reversible (68%). However, only 30% of sites explicitly indicated that IUDs are safe. Fifty percent of sites (n=53) contained inaccurate information about the IUD such as an increased risk of pelvic inflammatory disease beyond the insertion month (27%) or that women in non-monogamous relationships (30%) and nulliparous women (20%) are not appropriate candidates. Forty-four percent of websites stated that a mechanism of IUDs is prevention of implantation of a fertilized egg.
Only 3% of websites incorrectly stated that IUDs are an abortifacient. More than a quarter of sites listed an inaccurate contraindication to the IUD such as nulliparity, history of pelvic inflammatory disease, or history of an ectopic pregnancy. Conclusions The quality of information about IUDs available on the Internet is variable. Accurate information was mixed with inaccurate or outdated information that could perpetuate myths about IUDs. Clinicians need knowledge about accurate, evidence-based Internet resources to provide to women given the inconsistent quality of information available through online sources. PMID:26546848
Accuracy of information about the intrauterine device on the Internet.
Madden, Tessa; Cortez, Sarah; Kuzemchak, Marie; Kaphingst, Kimberly A; Politi, Mary C
2016-04-01
Intrauterine devices (IUDs) are highly effective methods of contraception, but use continues to lag behind less effective methods such as oral contraceptive pills and condoms. Women who are aware of the actual effectiveness of various contraceptive methods are more likely to choose the IUD. Conversely, women who are misinformed about the safety of IUDs may be less likely to use this method. Individuals increasingly use the Internet for health information. Information about IUDs obtained through the Internet may influence attitudes about IUD use among patients. Our objective was to evaluate the quality of information about IUDs among World Wide Web sites providing contraceptive information to the public. We developed a 56-item structured questionnaire to evaluate the quality of information about IUDs available through the Internet. We then conducted an online search to identify web sites containing information about contraception and IUDs using common search engines. The search was performed in August 2013 and web sites were reviewed again in October 2015 to ensure there were no substantial changes. Our search identified >2000 web sites, of which 108 were eligible for review; 105 (97.2%) of these sites contained information about IUDs. Of sites, 86% provided at least 1 mechanism of the IUD. Most web sites accurately reported advantages of the IUD including that it is long acting (91%), highly effective (82%), and reversible (68%). However, only 30% of sites explicitly indicated that IUDs are safe. Fifty percent (n = 53) of sites contained inaccurate information about the IUD such as an increased risk of pelvic inflammatory disease beyond the insertion month (27%) or that women in nonmonogamous relationships (30%) and nulliparous women (20%) are not appropriate candidates. Among sites, 44% stated that a mechanism of IUDs is prevention of implantation of a fertilized egg. Only 3% of web sites incorrectly stated that IUDs are an abortifacient. 
More than a quarter of sites listed an inaccurate contraindication to the IUD such as nulliparity, history of pelvic inflammatory disease, or history of an ectopic pregnancy. The quality of information about IUDs available on the Internet is variable. Accurate information was mixed with inaccurate or outdated information that could perpetuate myths about IUDs. Clinicians need knowledge about accurate, evidence-based Internet resources to provide to women given the inconsistent quality of information available through online sources. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of varying dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
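The two ingredients of such a hybrid score can be illustrated in one dimension: an exploration term (distance to the nearest existing sample) and an exploitation term (mismatch between the surrogate and its first-order Taylor expansion from that sample, a proxy for unresolved nonlinearity). The weighting and lack of normalization below are simplified stand-ins for the paper's formulation:

```python
def tead_score(c, samples, values, gradients, surrogate, w=0.5):
    """Hybrid adaptive-design score for candidate point c (1-D sketch).
    samples/values/gradients describe existing training runs; surrogate
    is the current cheap approximation. Higher score = more informative."""
    dists = [abs(s - c) for s in samples]
    j = min(range(len(samples)), key=dists.__getitem__)
    # first-order Taylor prediction from the nearest existing sample
    taylor = values[j] + gradients[j] * (c - samples[j])
    explore = dists[j]                      # favor sparsely sampled regions
    exploit = abs(surrogate(c) - taylor)    # favor regions the linear model misses
    return w * explore + (1.0 - w) * exploit


def next_sample(candidates, samples, values, gradients, surrogate, w=0.5):
    """Pick the candidate with the highest hybrid score as the next
    expensive model run."""
    return max(candidates,
               key=lambda c: tead_score(c, samples, values, gradients, surrogate, w))
```

With f(x) = x^2 sampled at x = 0 and x = 2, the midpoint wins: it is both far from the data and where the linearization error is largest, which is exactly the behavior an adaptive design wants.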
Data Integrity: History, Issues, and Remediation of Issues.
Rattan, Anil K
2018-01-01
Data integrity is critical to regulatory compliance, and it is the fundamental reason for 21 CFR Part 11 published by the U.S. Food and Drug Administration (FDA). FDA published the first guideline in 1963, and since then FDA and the European Union (EU) have published numerous guidelines on various topics related to data integrity for the pharmaceutical industry. Regulators want to make certain that industry captures accurate data during the drug development lifecycle and through commercialization; consider the number of warning letters issued lately by inspectors across the globe on data integrity. This article discusses the history of regulations put forward by various regulatory bodies, the term ALCOA Plus adopted by regulators, the impact of not following regulations, and some prevention methods using simple checklists, self-audits, and self-inspection techniques. FDA uses the acronym ALCOA to define its expectations of electronic data. ALCOA stands for Attributable, Legible, Contemporaneous, Original, and Accurate. ALCOA was further expanded to ALCOA Plus, where the Plus means Enduring, Available and Accessible, Complete, Consistent, Credible, and Corroborated. If we do not follow the regulations as written, there is a huge risk; this article covers some of the risk aspects. To prevent data integrity problems, various solutions can be implemented, such as a simple checklist for various systems, self-audits, and self-inspections. To do that we have to develop strategy and people, implement better business processes, and gain a better understanding of the data lifecycle as well as the technology. LAY ABSTRACT: If one does a Google search on "What is data integrity?" the first page will give the definition of data integrity, how to learn more about data integrity, the history of data integrity, risk management of data integrity, and, at the top, various U.S. Food and Drug Administration (FDA) and European Union (EU) regulations. Data integrity is nothing but the accuracy of data.
When someone searches Google for some words, we expect accurate results that we can rely on. The same principle applies during the drug development lifecycle. The pharmaceutical industry ensures that data entered at the various steps of drug development is accurate, so that we can have confidence that the drugs produced by the industry are within specified parameters. The regulations put forward by FDA and the EU are not new: the first regulation was published in 1963, and after that regulators published multiple guidelines. Inspectors from both regulatory bodies inspected the industry, and in some cases they found that the data was not accurate. If the pharmaceutical industry produces a drug within the stated parameters, then the drug is approved and available in the market for patients. If inspectors find that the data has been modified, then the drug is not approved. That means revenue loss for industry and drugs not available for patients. In this article, I explain some of the remediation plans for the industry that can be applied during the drug development lifecycle pathway. © PDA, Inc. 2018.
Tree decomposition based fast search of RNA structures including pseudoknots in genomes.
Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming
2005-01-01
Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome search for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph, and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that applying the alignment algorithm to genome search yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at http://w.uga.edu/RNA-informatics/software/index.php.
Parity violation in electron scattering
Souder, P.; Paschke, K. D.
2015-12-22
By comparing the cross sections for left- and right-handed electrons scattered from various unpolarized nuclear targets, the small parity-violating asymmetry can be measured. These asymmetry data probe a wide variety of important topics, including searches for new fundamental interactions and important features of nuclear structure that cannot be studied with other probes. A special feature of these experiments is that the results are interpreted with remarkably few theoretical uncertainties, which justifies pushing the experiments to the highest possible precision. To measure the small asymmetries accurately, a number of novel experimental techniques have been developed.
FitSearch: a robust way to interpret a yeast fitness profile in terms of drug's mode-of-action.
Lee, Minho; Han, Sangjo; Chang, Hyeshik; Kwak, Youn-Sig; Weller, David M; Kim, Dongsup
2013-01-01
Yeast deletion-mutant collections have been successfully used to infer the mode-of-action of drugs, especially by profiling chemical-genetic and genetic-genetic interactions on a genome-wide scale. Although tens of thousands of such profiles are publicly available, the lack of an accurate method for mining the data has been a major bottleneck to more widespread use of these useful resources. To open these public resources to general use, we designed FitRankDB as a general repository of fitness profiles and developed a new search algorithm, FitSearch, for identifying the profiles that have a high similarity score with statistical significance for a given fitness profile. We demonstrated that the new repository and algorithm are highly beneficial to researchers attempting to form hypotheses about the unknown modes-of-action of bioactive compounds, regardless of the type of experiment performed with a yeast deletion-mutant collection or the measurement platform used, especially for non-chip-based platforms. We showed that our new database and algorithm are useful when attempting to construct a hypothesis regarding the unknown function of a bioactive compound through small-scale experiments with a yeast deletion collection, in a platform-independent manner. FitRankDB and FitSearch enhance the ease of searching public yeast fitness profiles and obtaining insights into unknown mechanisms of action of drugs. FitSearch is freely available at http://fitsearch.kaist.ac.kr.
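A generic sketch of this kind of search, ranking stored profiles by similarity to a query and attaching an empirical significance estimate, might look like the following (this is not FitSearch's published algorithm; the permutation p-value is a common stand-in for a proper null model):

```python
import math
import random


def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)


def rank_profiles(query, database, n_perm=1000, seed=0):
    """Rank stored fitness profiles by correlation with the query and
    attach a permutation p-value: the fraction of shuffled queries that
    score at least as high. Returns (name, r, p) tuples, best first."""
    rng = random.Random(seed)
    out = []
    for name, profile in database.items():
        r = pearson(query, profile)
        hits = 0
        q = list(query)
        for _ in range(n_perm):
            rng.shuffle(q)
            if pearson(q, profile) >= r:
                hits += 1
        out.append((name, r, (hits + 1) / (n_perm + 1)))
    out.sort(key=lambda t: t[1], reverse=True)
    return out
```

A compound whose fitness profile correlates strongly and significantly with a reference profile of known mode-of-action becomes a candidate for sharing that mechanism, which is the hypothesis-generation step the abstract describes.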
Guiding Conformation Space Search with an All-Atom Energy Potential
Brunette, TJ; Brock, Oliver
2009-01-01
The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015
Honekamp, Wilfried; Ostermann, Herwig
2010-01-01
An increasing number of people search for health information online. During the last 10 years various researchers have determined the requirements for an ideal consumer health information system. The aim of this study was to figure out, whether medical laymen can find a more accurate diagnosis for a given anamnesis via the developed prototype health information system than via ordinary internet search. In a randomized controlled trial, the prototype information system was evaluated by the assessment of two sample cases. Participants had to determine the diagnosis of a patient with a headache via information found searching the web. A patient’s history sheet and a computer with internet access were provided to the participants and they were guided through the study by an especially designed study website. The intervention group used the prototype information system; the control group used common search engines and portals. The numbers of correct diagnoses in each group were compared. A total of 140 (60/80) participants took part in two study sections. In the first case, which determined a common diagnosis, both groups did equally well. In the second section, which determined a less common and more complex case, the intervention group did significantly better (P=0.031) due to the tailored information supply. Using medical expert systems in combination with a portal searching meta-search engine represents a feasible strategy to provide reliable patient-tailored information and can ultimately contribute to patient safety with respect to information found via the internet. PMID:20502597
ERIC Educational Resources Information Center
Atkinson, Kayla M.; Koenka, Alison C.; Sanchez, Carmen E.; Moshontz, Hannah; Cooper, Harris
2015-01-01
A complete description of the literature search, including the criteria used for the inclusion of reports after they have been located, used in a research synthesis or meta-analysis is critical if subsequent researchers are to accurately evaluate and reproduce a synthesis' methods and results. Based on previous guidelines and new suggestions, we…
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
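The two-step pattern (cheap model prunes the design space, expensive model refines only the survivors) can be sketched generically; the branch-and-bound bookkeeping of the actual tool is replaced here by a simple top-k cut:

```python
def hierarchical_search(candidates, coarse_model, fine_model, keep=10):
    """Two-step design search: score every candidate with a cheap
    low-fidelity model, keep only the most promising `keep` designs,
    then re-score just the survivors with the expensive high-fidelity
    model. Both models map a candidate to a score (higher is better)."""
    # cheap pass over the full space
    shortlist = sorted(candidates, key=coarse_model, reverse=True)[:keep]
    # expensive pass over the reduced space only
    best = max(shortlist, key=fine_model)
    return best, shortlist
```

The example below shows the trade-off the abstract describes: the coarse optimum (60) and fine optimum (58) differ, but as long as pruning keeps the fine optimum in the shortlist, the expensive model recovers it while being evaluated only a handful of times.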
Lee, Theresa M; Tu, Karen; Wing, Laura L; Gershon, Andrea S
2017-05-15
Little is known about using electronic medical records to identify patients with chronic obstructive pulmonary disease to improve quality of care. Our objective was to develop electronic medical record algorithms that can accurately identify patients with chronic obstructive pulmonary disease. A retrospective chart abstraction study was conducted on data from the Electronic Medical Record Administrative data Linked Database (EMRALD®) housed at the Institute for Clinical Evaluative Sciences. Abstracted charts provided the reference standard based on available physician diagnoses, chronic obstructive pulmonary disease-specific medications, smoking history and pulmonary function testing. Chronic obstructive pulmonary disease electronic medical record algorithms using combinations of terminology in the cumulative patient profile (CPP; problem list/past medical history), physician billing codes (chronic bronchitis/emphysema/other chronic obstructive pulmonary disease), and prescriptions were tested against the reference standard. Sensitivity, specificity, and positive/negative predictive values (PPV/NPV) were calculated. There were 364 patients with chronic obstructive pulmonary disease identified in a 5889 randomly sampled cohort aged ≥35 years (prevalence = 6.2%). The electronic medical record algorithm consisting of ≥3 physician billing codes for chronic obstructive pulmonary disease per year; documentation in the CPP; tiotropium prescription; or ipratropium (or its formulations) prescription and a chronic obstructive pulmonary disease billing code had a sensitivity of 76.9% (95% CI: 72.2-81.2), specificity of 99.7% (99.5-99.8), PPV of 93.6% (90.3-96.1), and NPV of 98.5% (98.1-98.8). Electronic medical record algorithms can accurately identify patients with chronic obstructive pulmonary disease in primary care records. They can be used to enable further studies of practice patterns and chronic obstructive pulmonary disease management in primary care.
NOVEL ALGORITHM SEARCH TECHNIQUE: Researchers develop an algorithm that can accurately search through electronic health records to find patients with chronic lung disease. Mining population-wide data for information on patients diagnosed and treated with chronic obstructive pulmonary disease (COPD) in primary care could help inform future healthcare and spending practices. Theresa Lee at the University of Toronto, Canada, and colleagues used an algorithm to search electronic medical records and identify patients with COPD from doctors' notes, prescriptions and symptom histories. They carefully adjusted the algorithm to improve sensitivity and predictive value by adding details such as specific medications, physician codes related to COPD, and different combinations of terminology in doctors' notes. The team accurately identified 364 patients with COPD in a randomly-selected cohort of 5889 people. Their results suggest opportunities for broader, informative studies of COPD in wider populations.
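The best-performing case definition above is an OR over four criteria; a sketch with illustrative field names (not EMRALD's actual schema), plus the standard accuracy metrics reported in the abstract:

```python
def copd_flag(record):
    """COPD case definition sketched from the abstract: flag a patient
    if any one of the four criteria holds. Field names are illustrative."""
    codes = record.get("copd_billing_codes_per_year", 0)
    rx = record.get("prescriptions", [])
    return (codes >= 3                               # >=3 COPD billing codes/year
            or record.get("copd_in_cpp", False)      # documented in the CPP
            or "tiotropium" in rx                    # tiotropium prescription
            or ("ipratropium" in rx and codes >= 1)) # ipratropium + a COPD code


def confusion_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Note the asymmetry in the rule: tiotropium alone flags a patient, but ipratropium only does so alongside a billing code, since ipratropium is also prescribed for asthma.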
NASA Astrophysics Data System (ADS)
Hoffman, Joseph Loris
1999-11-01
This study examined the information-seeking strategies and science content understandings learners developed as a result of using on-line resources in the University of Michigan Digital Library and on the World Wide Web. Eight pairs of sixth grade students from two teachers' classrooms were observed during inquiries for astronomy, ecology, geology, and weather, and a final transfer task assessed learners' capabilities at the end of the school year. Data included video recordings of students' screen activity and conversations, journals and completed activity sheets, final artifacts, and semi-structured interviews. Learners' information-seeking strategies included activities related to asking, planning, tool usage, searching, assessing, synthesizing, writing, and creating. Analysis of data found a majority of learners posed meaningful, open-ended questions, used technological tools appropriately, developed pertinent search topics, were thoughtful in queries to the digital library, browsed sites purposefully to locate information, and constructed artifacts with novel formats. Students faced challenges when planning activities, assessing resources, and synthesizing information. Possible explanations were posed linking pedagogical practices with learners' growth and use of inquiry strategies. Data from classroom-lab video and teacher interviews showed varying degrees of student scaffolding: development and critique of initial questions, utilization of search tools, use of journals for reflection on activities, and requirements for final artifacts. Science content understandings included recalling information, offering explanations, articulating relationships, and extending explanations. A majority of learners constructed partial understandings limited to information recall and simple explanations, and these occasionally contained inaccurate conceptualizations. Web site design features had some influence on the construction of learners' content understandings.
Analysis of data suggests sites with high-quality general design, navigation, and content helped to foster the construction of broad and accurate understandings, while context and interactivity had less impact. However, student engagement with inquiry strategies had a greater impact on the construction of understandings. Gaining accurate and in-depth understandings from on-line resources is a complex process for young learners. Teachers can support students by helping them engage in all phases of the information-seeking process, locate useful information with prescreened resources, build background understanding with off-line instruction, and process new information deeply through extended writing and conversation.
Yom-Tov, Elad; Fernandez-Luque, Luis
2014-01-01
Vaccination campaigns are one of the most important and successful public health programs ever undertaken. People who want to learn about vaccines in order to make an informed decision on whether to vaccinate are faced with a wealth of information on the Internet, both for and against vaccinations. In this paper we develop an automated way to score Internet search queries and web pages as to the likelihood that a person making these queries or reading those pages would decide to vaccinate. We apply this method to data from a major Internet search engine, collected while people sought information about the measles, mumps, and rubella (MMR) vaccine. We show that our method is accurate, and use it to learn about people's information acquisition process. Our results show that people who are pro-vaccination and people who are anti-vaccination seek similar information, but browsing this information has differing effects on their future browsing. These findings demonstrate the need for health authorities to tailor their information according to the current stance of users.
Phelan, Nigel; Davy, Shane; O'Keeffe, Gerard W; Barry, Denis S
2017-03-01
The role of e-learning platforms in anatomy education continues to expand as self-directed learning is promoted in higher education. Although a wide range of e-learning resources are available, determining student use of non-academic internet resources requires novel approaches. One such approach that may be useful is the Google Trends© web application. To determine the feasibility of using Google Trends to gain insights into anatomy-related online searches, Google Trends data from the United States from January 2010 to December 2015 were analyzed. Data collected were based on the recurrence of keywords related to head and neck anatomy generated from the American Association of Clinical Anatomists and the Anatomical Society suggested anatomy syllabi. Relative search volume (RSV) data were analyzed for seasonal periodicity and their overall temporal trends. Following exclusions due to insufficient search volume data, 29 out of 36 search terms were analyzed. Significant seasonal patterns occurred in 23 search terms. Thirty-nine seasonal peaks were identified, mainly in October and April, coinciding with teaching periods in anatomy curricula. A positive correlation of RSV with time over the 6-year study period occurred in 25 out of 29 search terms. These data demonstrate how Google Trends may offer insights into the nature and timing of online search patterns of anatomical syllabi and may potentially inform the development and timing of targeted online supports to ensure that students of anatomy have the opportunity to engage with online content that is both accurate and fit for purpose. Anat Sci Educ 10: 152-159. © 2016 American Association of Anatomists.
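The seasonal-peak analysis described above can be sketched with a simple comparison of monthly relative search volume (RSV) against the overall mean. The RSV values and the 1.2× threshold below are hypothetical; the study used formal seasonality statistics on six years of Google Trends data.

```python
# Sketch: flagging seasonal peaks in monthly relative search volume (RSV).
# Data and threshold are illustrative, not the study's.

def seasonal_peaks(rsv_by_month, threshold=1.2):
    """Return months whose mean RSV exceeds `threshold` times the overall mean.

    rsv_by_month: dict mapping month name -> list of yearly RSV values.
    """
    overall = [v for vals in rsv_by_month.values() for v in vals]
    grand_mean = sum(overall) / len(overall)
    return sorted(
        month for month, vals in rsv_by_month.items()
        if sum(vals) / len(vals) > threshold * grand_mean
    )

# Six years of hypothetical RSV for three months; October and April peak,
# mirroring the teaching-period peaks reported in the study.
rsv = {
    "April":   [80, 85, 90, 88, 92, 95],
    "July":    [20, 22, 18, 25, 21, 19],
    "October": [85, 88, 91, 87, 93, 96],
}
print(seasonal_peaks(rsv))
```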
An Information Infrastructure for Coastal Models and Data
NASA Astrophysics Data System (ADS)
Hardin, D.; Keiser, K.; Conover, H.; Graves, S.
2007-12-01
Advances in semantics and visualization have given rise to new capabilities for the location, manipulation, integration, management and display of data and information in and across domains. An example of these capabilities is illustrated by a coastal restoration project that utilizes satellite, in-situ data and hydrodynamic model output to address seagrass habitat restoration in the Northern Gulf of Mexico. In this project a standard stressor conceptual model was implemented as an ontology in addition to the typical CMAP diagram. The ontology captures the elements of the seagrass conceptual model as well as the relationships between them. Noesis, developed by the University of Alabama in Huntsville, is an application that provides a simple but powerful way to search and organize data and information represented by ontologies. Noesis uses domain ontologies to help scope search queries to ensure that search results are both accurate and complete. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. As a resource aggregator Noesis categorizes search results returned from multiple, concurrent search engines such as Google, Yahoo, and Ask.com. Search results are further directed by accessing domain specific catalogs that include outputs from hydrodynamic and other models. Embedded within the search results are links that invoke applications such as web map displays, animation tools and virtual globe applications such as Google Earth. In the seagrass prioritization project Noesis is used to locate information that is vital to understanding the impact of stressors on the habitat. This presentation will show how the intelligent search capabilities of Noesis are coupled with visualization tools and model output to investigate the restoration of seagrass habitat.
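The ontology-driven query scoping that Noesis performs (covering synonyms, specializations, generalizations and related concepts) can be sketched as a small query-expansion step. The toy ontology and the `expand_query` helper below are hypothetical illustrations; Noesis itself works from full domain ontologies.

```python
# Sketch: ontology-driven query expansion in the spirit of Noesis.
# The tiny ontology is made up for illustration.

ONTOLOGY = {
    "seagrass": {
        "synonyms": ["submerged aquatic vegetation"],
        "generalizations": ["marine plant"],
        "related": ["turbidity", "salinity"],
    },
}

def expand_query(term, ontology=ONTOLOGY):
    """Broaden a search term with synonyms, generalizations, and related concepts."""
    entry = ontology.get(term, {})
    expanded = [term]
    for relation in ("synonyms", "generalizations", "related"):
        expanded.extend(entry.get(relation, []))
    return expanded

print(expand_query("seagrass"))
```

The expanded term list would then be fed to each underlying search engine, so that results mentioning only a synonym or a related stressor are still retrieved.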
Numerical simulation of magmatic hydrothermal systems
Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.
2010-01-01
The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.
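The simplest building block of the coupled transport equations discussed above is one-dimensional heat conduction, which a minimal explicit finite-difference scheme can illustrate. Real hydrothermal codes solve coupled multiphase equations with full EOS; the grid, diffusivity, and boundary values here are assumptions for illustration only.

```python
# Sketch: explicit finite-difference step for dT/dt = alpha * d2T/dx2
# (unit grid spacing and time step; alpha <= 0.5 for stability).

def step_heat(T, alpha=0.1):
    """One explicit time step with fixed-temperature boundaries."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + alpha * (T[i - 1] - 2 * T[i] + T[i + 1])
    return new

# A hot left boundary diffusing into an initially cold domain.
T = [100.0] + [0.0] * 9
for _ in range(50):
    T = step_heat(T)
print([round(t, 1) for t in T])  # smooth profile decaying from the hot wall
```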
Garcia, Michael; Daugherty, Christopher; Ben Khallouq, Bertha; Maugans, Todd
2018-05-01
OBJECTIVE The Internet is used frequently by patients and family members to acquire information about pediatric neurosurgical conditions. The sources, nature, accuracy, and usefulness of this information have not been examined recently. The authors analyzed the results from searches of 10 common pediatric neurosurgical terms using a novel scoring test to assess the value of the educational information obtained. METHODS Google and Bing searches were performed for 10 common pediatric neurosurgical topics (concussion, craniosynostosis, hydrocephalus, pediatric brain tumor, pediatric Chiari malformation, pediatric epilepsy surgery, pediatric neurosurgery, plagiocephaly, spina bifida, and tethered spinal cord). The first 10 "hits" obtained with each search engine were analyzed using the Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) test, which assigns a numerical score in each of 5 domains. Agreement between results was assessed for 1) concurrent searches with Google and Bing; 2) Google searches over time (6 months apart); 3) Google searches using mobile and PC platforms concurrently; and 4) searches using privacy settings. Readability was assessed with an online analytical tool. RESULTS Google and Bing searches yielded information with similar CRAAP scores (mean 72% and 75%, respectively), but with frequently differing results (58% concordance/matching results). There was a high level of agreement (72% concordance) over time for Google searches and also between searches using general and privacy settings (92% concordance). Government sources scored the best in both CRAAP score and readability. Hospitals and universities were the most prevalent sources, but these sources had the lowest CRAAP scores, due in part to an abundance of self-marketing. The CRAAP scores for mobile and desktop platforms did not differ significantly (p = 0.49). CONCLUSIONS Google and Bing searches yielded useful educational information, using either mobile or PC platforms. 
Most information was relevant and accurate; however, the depth and breadth of information was variable. Search results over a 6-month period were moderately stable. Pediatric neurosurgery practices and neurosurgical professional organization websites were inferior (less current, less accurate, less authoritative, and less purposeful) to governmental and encyclopedia-type resources such as Wikipedia. This presents an opportunity for pediatric neurosurgeons to participate in the creation of better online patient/parent educational material.
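The CRAAP percentages and concordance figures reported above can be sketched as two small calculations. The 10-point maximum per domain and the sample ratings are hypothetical; the paper does not publish its rubric scale here.

```python
# Sketch: CRAAP percentage score and search-result concordance.
# Scale and sample values are illustrative assumptions.

def craap_percent(scores, max_per_domain=10):
    """scores: dict mapping the 5 CRAAP domains to numeric ratings."""
    domains = ("currency", "relevance", "authority", "accuracy", "purpose")
    total = sum(scores[d] for d in domains)
    return 100.0 * total / (max_per_domain * len(domains))

def concordance(results_a, results_b):
    """Fraction of one result list also present in the other (order ignored)."""
    return len(set(results_a) & set(results_b)) / len(results_a)

site = {"currency": 8, "relevance": 9, "authority": 7, "accuracy": 8, "purpose": 4}
print(craap_percent(site))

google = ["a", "b", "c", "d", "e"]
bing = ["a", "b", "x", "d", "y"]
print(concordance(google, bing))
```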
GeoSearch: A lightweight broking middleware for geospatial resources discovery
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; Liu, K.; Xia, J.
2012-12-01
With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous resources. The past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge for the following reasons: 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to these issues, but integrating them with existing services is challenging due to expandability limitations of the service frameworks and metadata templates. 4) The capabilities to help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results.
Furthermore, the presentation of the value-added additional information (such as, service quality and user feedback), which conveys important decision supporting information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on brokering middleware framework to search, integrate and visualize heterogeneous geospatial resources. Specifically, 1) A lightweight discover broker is developed to conduct distributed search. The broker retrieves metadata records for geospatial resources and additional information from dispersed services (portals and catalogues) and other systems on the fly. 2) A quality monitoring and evaluation broker (i.e., QoS Checker) is developed and integrated to provide quality information for geospatial web services. 3) The semantic assisted search and relevance evaluation functions are implemented by loosely interoperating with ESIP Testbed component. 4) Sophisticated information and data visualization functionalities and tools are assembled to improve user experience and assist resource selection.
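The on-the-fly, fan-out retrieval performed by the lightweight discovery broker can be sketched with concurrent queries against several catalogues whose results are merged into one list. The catalogue functions below are stand-ins invented for illustration; a real broker would speak the catalogues' own protocols (e.g. CSW or OpenSearch).

```python
# Sketch: a broker that queries several catalogues concurrently and merges
# their metadata, in the spirit of GeoSearch. Catalogue functions are fakes.

from concurrent.futures import ThreadPoolExecutor

def catalog_a(query):
    return [{"title": "Landsat " + query, "source": "A"}]

def catalog_b(query):
    return [{"title": "MODIS " + query, "source": "B"}]

def broker_search(query, catalogs):
    """Fan a query out to all catalogues in parallel and flatten the results."""
    with ThreadPoolExecutor(max_workers=len(catalogs)) as pool:
        result_lists = pool.map(lambda c: c(query), catalogs)
    return [record for records in result_lists for record in records]

hits = broker_search("sea surface temperature", [catalog_a, catalog_b])
print(sorted(h["source"] for h in hits))
```

Because nothing is harvested ahead of time, the broker avoids the storage and consistency burdens of periodic harvesting described above.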
WCSTools 3.0: More Tools for Image Astrometry and Catalog Searching
NASA Astrophysics Data System (ADS)
Mink, Douglas J.
For five years, WCSTools has provided image astrometry for astronomers who need accurate positions for objects they wish to observe. Other functions have been added and improved since the package was first released. Support has been added for new catalogs, such as the GSC-ACT, 2MASS Point Source Catalog, and GSC II, as they have been published. A simple command line interface can search any supported catalog, returning information in several standard formats, whether the catalog is on a local disk or searchable over the World Wide Web. The catalog searching routine can be located on either end (or both ends!) of such a web connection, and the output from one catalog search can be used as the input to another search.
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning a flight on optimized routes. The routes can be optimized by searching for the best connections based on a cost function defined by the airline. The algorithm most commonly used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result and the time taken for the search is relatively long. This paper tests a new route search optimization algorithm that combines the principles of simulated annealing and genetic algorithms. The experimental route search results presented are shown to be computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is optimal for the random routing feature that is highly sought by many regional operators.
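The simulated-annealing half of the hybrid search described above can be sketched as annealing over waypoint orderings with a segment-reversal move. The leg costs, cooling schedule, and waypoint names are hypothetical; the paper's cost function is airline-defined.

```python
# Sketch: simulated annealing over waypoint orderings. Costs are made up.

import math
import random

COST = {  # symmetric leg costs between four waypoints
    ("A", "B"): 4, ("A", "C"): 2, ("A", "D"): 7,
    ("B", "C"): 1, ("B", "D"): 3, ("C", "D"): 5,
}

def leg(a, b):
    return COST.get((a, b)) or COST[(b, a)]

def route_cost(route):
    return sum(leg(a, b) for a, b in zip(route, route[1:]))

def anneal(route, temp=10.0, cooling=0.95, steps=500, seed=1):
    rng = random.Random(seed)
    best = current = route[:]
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(route)), 2))
        # Neighbor move: reverse a random segment of the route.
        candidate = current[:i] + current[i:j + 1][::-1] + current[j + 1:]
        delta = route_cost(candidate) - route_cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = candidate
            if route_cost(current) < route_cost(best):
                best = current[:]
        temp *= cooling
    return best

best = anneal(["A", "B", "C", "D"])
print(best, route_cost(best))
```

A genetic-algorithm layer, as in the paper, would maintain a population of such routes and recombine them between annealing phases.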
An analysis of iterated local search for job-shop scheduling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul
2003-08-01
Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state-of-the-art in the analysis of meta-heuristics by providing answers to this research question. They focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak, and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search.
The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take as the well-known disjunctive graph distance [MBK99].
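The two-phase ILS loop described above (greedy descent as the small step, a random perturbation as the large step) can be sketched on a toy one-dimensional landscape rather than the JSP. The landscape values and kick size are hypothetical.

```python
# Sketch: generic ILS skeleton on a toy integer landscape (values made up).

import random

LANDSCAPE = [5, 3, 4, 6, 2, 4, 7, 1, 3, 5]  # objective value at each index

def descend(i):
    """Small-step phase: greedy descent to the better neighbor until stuck."""
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(LANDSCAPE)]
        best = min(neighbors, key=lambda j: LANDSCAPE[j])
        if LANDSCAPE[best] >= LANDSCAPE[i]:
            return i
        i = best

def ils(start, kicks=20, kick_size=3, seed=0):
    rng = random.Random(seed)
    best = current = descend(start)
    for _ in range(kicks):
        # Large-step phase: random jump out of the current attractor basin.
        kicked = max(0, min(len(LANDSCAPE) - 1,
                            current + rng.choice([-kick_size, kick_size])))
        candidate = descend(kicked)
        if LANDSCAPE[candidate] <= LANDSCAPE[current]:
            current = candidate          # accept equal-or-better local optima
        if LANDSCAPE[current] < LANDSCAPE[best]:
            best = current
    return best

local = descend(0)   # descent alone gets stuck in a local optimum
print(LANDSCAPE[local], LANDSCAPE[ils(0)])
```

The paper's point about weak attractor basins corresponds here to the kick being just large enough that descent from the perturbed point ends somewhere new.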
Toward canonical ensemble distribution from self-guided Langevin dynamics simulation
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-04-01
This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momentums to enhance low-frequency motion. This enhancement in low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrated how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means for conformational sampling.
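The local-averaging split at the heart of SGLD, separating a property into a low-frequency portion (local average) and a high-frequency remainder, can be sketched with a centered moving average. The window length here is an arbitrary stand-in; in SGLD it corresponds to the guiding correlation time.

```python
# Sketch: splitting a series into low- and high-frequency portions via a
# centered moving average. Window length is illustrative.

def local_average(series, window=3):
    """Centered moving average; edges use whatever samples are available."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
low = local_average(signal)
high = [s - l for s, l in zip(signal, low)]
# By construction the two portions sum back to the original property.
print([round(l + h, 6) for l, h in zip(low, high)])
```

The paper's conversion between canonical and self-guided ensembles is expressed in terms of exactly such low- and high-frequency components.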
Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.
1997-01-01
Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
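The combinatorial subset selection performed by tabu search above can be sketched with a swap-move tabu search over actuator subsets. The per-actuator "noise reduction" table and diminishing-returns scoring are hypothetical; the study scored subsets through measured transfer functions under force constraints.

```python
# Sketch: tabu search over actuator subsets. Reduction values are made up.

import random

REDUCTION = {0: 4.0, 1: 3.5, 2: 1.0, 3: 2.5, 4: 0.5, 5: 3.0}  # dB, illustrative

def score(subset):
    # Diminishing returns: later-ranked actuators contribute less.
    gains = sorted((REDUCTION[a] for a in subset), reverse=True)
    return sum(g / (k + 1) for k, g in enumerate(gains))

def tabu_search(size, iters=50, tabu_len=4, seed=0):
    rng = random.Random(seed)
    actuators = list(REDUCTION)
    current = set(rng.sample(actuators, size))
    best = set(current)
    tabu = []
    for _ in range(iters):
        # Neighborhood: swap one member for one non-member, avoiding tabu swaps.
        moves = [(out, inn) for out in current for inn in actuators
                 if inn not in current and (out, inn) not in tabu]
        if not moves:
            break
        out, inn = max(moves, key=lambda m: score(current - {m[0]} | {m[1]}))
        current = current - {out} | {inn}
        tabu.append((inn, out))          # forbid the immediate reverse swap
        tabu = tabu[-tabu_len:]
        if score(current) > score(best):
            best = set(current)
    return best

chosen = tabu_search(3)
print(sorted(chosen), round(score(chosen), 2))
```

A force constraint, as the study recommends, would enter as a penalty or filter inside `score`.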
NASA Astrophysics Data System (ADS)
Greenway, D. P.; Hackett, E.
2017-12-01
Under certain atmospheric refractivity conditions, propagated electromagnetic (EM) waves can become trapped between the surface and the bottom of the atmosphere's mixed layer, which is referred to as surface duct propagation. Being able to predict the presence of these surface ducts can greatly benefit users and developers of sensing technologies and communication systems because they significantly influence the performance of these systems. However, the ability to directly measure or model a surface ducting layer is challenging due to the high spatial resolution and large spatial coverage needed to make accurate refractivity estimates for EM propagation; thus, inverse methods have become an increasingly popular way of determining atmospheric refractivity. This study uses data from the Coupled Ocean/Atmosphere Mesoscale Prediction System developed by the Naval Research Laboratory and instrumented helicopter (helo) measurements taken during the Wallops Island Field Experiment to evaluate the use of ensemble forecasts in refractivity inversions. Helo measurements and ensemble forecasts are optimized to a parametric refractivity model, and three experiments are performed to evaluate whether incorporation of ensemble forecast data aids in more timely and accurate inverse solutions using genetic algorithms. The results suggest that using optimized ensemble members as an initial population for the genetic algorithms generally enhances the accuracy and speed of the inverse solution; however, use of the ensemble data to restrict parameter search space yields mixed results. Inaccurate results are related to parameterization of the ensemble members' refractivity profile and the subsequent extraction of the parameter ranges to limit the search space.
Accurate mass measurements and their appropriate use for reliable analyte identification.
Godfrey, A Ruth; Brenton, A Gareth
2012-09-01
Accurate mass instrumentation is becoming increasingly available to non-expert users. These data can be misused, particularly for analyte identification. Current best practice in assigning a potential elemental formula for reliable analyte identification is described, together with modern informatic approaches to analyte elucidation, including chemometric characterisation, data processing and searching using facilities such as the Chemical Abstracts Service (CAS) Registry and Chemspider.
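The first step of the formula-assignment workflow described above is enumerating candidate elemental formulae whose monoisotopic mass falls within a ppm tolerance of the measured mass. The monoisotopic masses below are standard values; the element ranges and 5 ppm tolerance are illustrative assumptions, and real workflows add heuristic filters (e.g. ring/double-bond equivalents, nitrogen rule).

```python
# Sketch: brute-force candidate formula search within a ppm mass tolerance.

MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def candidate_formulas(target, ppm=5.0, max_atoms=(20, 40, 5, 10)):
    """Return CcHhNnOo formulae whose monoisotopic mass is within `ppm`."""
    tol = target * ppm / 1e6
    hits = []
    c_max, h_max, n_max, o_max = max_atoms
    for c in range(1, c_max + 1):
        for n in range(n_max + 1):
            for o in range(o_max + 1):
                base = c * MASS["C"] + n * MASS["N"] + o * MASS["O"]
                h = round((target - base) / MASS["H"])
                if 0 <= h <= h_max:
                    mass = base + h * MASS["H"]
                    if abs(mass - target) <= tol:
                        hits.append(f"C{c}H{h}N{n}O{o}")
    return hits

# Caffeine, C8H10N4O2, monoisotopic mass ~194.0804
print(candidate_formulas(194.0804))
```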
Miller, Steven C M
2015-06-01
Portable electronic devices play an important role in the management of type 1 diabetes mellitus. Electromagnetic interference from electronic devices has been shown to impair the function of an avalanche transceiver in search mode (but not in transmitting mode). This study investigates the influence of electromagnetic interference from diabetes devices on a searching avalanche beacon. The greatest distance at which an avalanche transceiver (in search mode) could accurately indicate the location of a transmitting transceiver was assessed when portable electronic devices (including an insulin pump and commonly used real-time continuous subcutaneous glucose monitoring system [rtCGMS]) were held in close proximity to each transceiver. The searching transceiver could accurately locate a transmitted signal at a distance of 30 m when used alone. This distance was unchanged by the Dexcom G4 rtCGMS, but was reduced to 10 m when the Medtronic Guardian rtCGMS was held close (within 30 cm) to the receiving beacon. Interference from the Animas Vibe insulin pump reduced this distance to 5 m, impairing the searching transceiver in a manner identical to the effect of a cell phone. Electromagnetic interference produced by some diabetes devices when held within 30 cm of a searching avalanche transceiver can impair the ability to locate a signal. Such interference could significantly compromise the outcome of a companion rescue scenario. Further investigation using other pumps and rtCGMS devices is required to evaluate all available diabetes electronics. Meantime, all electronic diabetes devices including rtCGMS and insulin pumps should not be used within 30 cm of an avalanche transceiver. Copyright © 2015 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
Triplet supertree heuristics for the tree of life
Lin, Harris T; Burleigh, J Gordon; Eulenstein, Oliver
2009-01-01
Background There is much interest in developing fast and accurate supertree methods to infer the tree of life. Supertree methods combine smaller input trees with overlapping sets of taxa to make a comprehensive phylogenetic tree that contains all of the taxa in the input trees. The intrinsically hard triplet supertree problem takes a collection of input species trees and seeks a species tree (supertree) that maximizes the number of triplet subtrees that it shares with the input trees. However, the utility of this supertree problem has been limited by a lack of efficient and effective heuristics. Results We introduce fast hill-climbing heuristics for the triplet supertree problem that perform a step-wise search of the tree space, where each step is guided by an exact solution to an instance of a local search problem. To realize time-efficient heuristics we designed the first nontrivial algorithms for two standard search problems, which greatly improve on the time complexity of the best known (naïve) solutions by factors of n and n² (where n is the number of taxa in the supertree). These algorithms enable large-scale supertree analyses based on the triplet supertree problem that were previously not possible. We implemented hill-climbing heuristics that are based on our new algorithms, and in analyses of two published supertree data sets, we demonstrate that our new heuristics outperform other standard supertree methods in maximizing the number of triplets shared with the input trees. Conclusion With our new heuristics, the triplet supertree problem is now computationally more tractable for large-scale supertree analyses, and it provides a potentially more accurate alternative to existing supertree methods. PMID:19208181
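The objective being maximized above, the number of rooted triplets a supertree shares with an input tree, can be sketched for small trees by enumerating each tree's resolved triplets ab|c and intersecting the two sets. The nested-tuple tree encoding is an assumption for illustration; the paper's algorithms avoid this brute-force enumeration entirely.

```python
# Sketch: enumerating the resolved rooted triplets ab|c of a tree given as
# nested tuples, then counting triplets shared between two trees.

from itertools import combinations

def leaves(tree):
    return {tree} if isinstance(tree, str) else set().union(*map(leaves, tree))

def triplets(tree):
    """All resolved rooted triplets, encoded as (frozenset({a, b}), c)."""
    found = set()
    def walk(node):
        if isinstance(node, str):
            return
        kids = [leaves(child) for child in node]
        for i, inside in enumerate(kids):
            outside = set().union(*(k for j, k in enumerate(kids) if j != i))
            # a, b first meet inside child i; c attaches outside -> ab|c
            for a, b in combinations(sorted(inside), 2):
                for c in outside:
                    found.add((frozenset((a, b)), c))
        for child in node:
            walk(child)
    walk(tree)
    return found

t1 = (("a", "b"), ("c", "d"))
t2 = ((("a", "b"), "c"), "d")
shared = triplets(t1) & triplets(t2)
print(len(shared))
```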
Fan, Long; Hui, Jerome H L; Yu, Zu Guo; Chu, Ka Hou
2014-07-01
Species identification based on short sequences of DNA markers, that is, DNA barcoding, has emerged as an integral part of modern taxonomy. However, software for the analysis of large and multilocus barcoding data sets is scarce. The Basic Local Alignment Search Tool (BLAST) is currently the fastest tool capable of handling large databases (e.g. >5000 sequences), but its accuracy is a concern and has been criticized for its local optimization. However, current more accurate software requires sequence alignment or complex calculations, which are time-consuming when dealing with large data sets during data preprocessing or during the search stage. Therefore, it is imperative to develop a practical program for both accurate and scalable species identification for DNA barcoding. In this context, we present VIP Barcoding: a user-friendly software in graphical user interface for rapid DNA barcoding. It adopts a hybrid, two-stage algorithm. First, an alignment-free composition vector (CV) method is utilized to reduce searching space by screening a reference database. The alignment-based K2P distance nearest-neighbour method is then employed to analyse the smaller data set generated in the first stage. In comparison with other software, we demonstrate that VIP Barcoding has (i) higher accuracy than Blastn and several alignment-free methods and (ii) higher scalability than alignment-based distance methods and character-based methods. These results suggest that this platform is able to deal with both large-scale and multilocus barcoding data with accuracy and can contribute to DNA barcoding for modern taxonomy. VIP Barcoding is free and available at http://msl.sls.cuhk.edu.hk/vipbarcoding/. © 2014 John Wiley & Sons Ltd.
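The second, alignment-based stage of VIP Barcoding uses the Kimura two-parameter (K2P) distance, which can be sketched directly from its transition/transversion proportions. The toy sequences are hypothetical; the real tool applies this to the CV-screened reference subset.

```python
# Sketch: K2P distance d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q)),
# where P and Q are transition and transversion proportions.

import math

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(seq1, seq2):
    """K2P distance of two aligned, equal-length DNA sequences."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    n = len(pairs)
    p = sum((a, b) in TRANSITIONS for a, b in pairs) / n
    q = sum(a != b and (a, b) not in TRANSITIONS for a, b in pairs) / n
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

print(round(k2p_distance("ACGTACGT", "GCGTATGT"), 4))  # ≈ 0.3466, two transitions
```

A nearest-neighbour query then assigns a barcode to the reference species with the smallest such distance.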
Cluster-search based monitoring of local earthquakes in SeisComP3
NASA Astrophysics Data System (ADS)
Roessler, D.; Becker, J.; Ellguth, E.; Herrnkind, S.; Weber, B.; Henneberger, R.; Blanck, H.
2016-12-01
We present a new cluster-search based SeisComP3 module for locating local and regional earthquakes in real time. Real-time earthquake monitoring systems such as SeisComP3 provide the backbones for earthquake early warning (EEW), tsunami early warning (TEW) and the rapid assessment of natural and induced seismicity. For any earthquake monitoring system, fast and accurate event locations are fundamental in determining the reliability and the impact of further analysis. The OpenSource version of SeisComP3 includes a two-stage detector for picking P waves and a phase associator for locating earthquakes based on P-wave detections. scanloc is a more advanced earthquake location program developed by gempa GmbH with seamless integration into SeisComP3. scanloc performs an advanced cluster search to discriminate earthquakes occurring closely in space and time and makes additional use of S-wave detections. It has proven to provide fast and accurate earthquake locations at local and regional distances, where it outperforms the base SeisComP3 tools. We demonstrate the performance of scanloc for monitoring induced seismicity as well as local and regional earthquakes in different tectonic regimes including subduction, spreading and intra-plate regions. In particular, we present examples and catalogs from real-time monitoring of earthquakes in Northern Chile in recent years, based on data from the IPOC network operated by the GFZ German Research Centre for Geosciences. Depending on epicentral distance and data transmission, earthquake locations are available within a few seconds after origin time when using scanloc. The association of automatic S-wave detections provides a better constraint on focal depth.
Marufu, Takawira C; Mannings, Alexa; Moppett, Iain K
2015-12-01
Accurate peri-operative risk prediction is an essential element of clinical practice. Various risk stratification tools for assessing patients' risk of mortality or morbidity have been developed and applied in clinical practice over the years. This review aims to outline essential characteristics (predictive accuracy, objectivity, clinical utility) of currently available risk scoring tools for hip fracture patients. We searched eight databases: AMED, CINAHL, ClinicalTrials.gov, Cochrane, DARE, EMBASE, MEDLINE and Web of Science for all relevant studies published until April 2015. We included published English language observational studies that considered the predictive accuracy of risk stratification tools for patients with fragility hip fracture. After removal of duplicates, 15,620 studies were screened. Twenty-nine papers met the inclusion criteria, evaluating 25 risk stratification tools. Risk stratification tools considered in more than two studies were the ASA, CCI, E-PASS, NHFS and O-POSSUM. All tools were moderately accurate and validated in multiple studies; however, there are some limitations to consider. The E-PASS and O-POSSUM are comprehensive but complex, and require intraoperative data, making them a challenge to use at the patient bedside. The ASA, CCI and NHFS are simple, easy and inexpensive, using routinely available preoperative data. In contrast to the ASA and CCI, which have subjective variables in addition to other limitations, the NHFS variables are all objective. In the search for a simple, inexpensive, easy to calculate, objective and accurate tool, the NHFS may be the most appropriate of the currently available scores for hip fracture patients. However, more studies need to be undertaken before it becomes a national hip fracture risk stratification or audit tool of choice. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Padilla, Mabel; Mattson, Christine L; Scheer, Susan; Udeagu, Chi-Chi N; Buskin, Susan E; Hughes, Alison J; Jaenicke, Thomas; Wohl, Amy Rock; Prejean, Joseph; Wei, Stanley C
Human immunodeficiency virus (HIV) case surveillance and other health care databases are increasingly being used for public health action, which has the potential to optimize the health outcomes of people living with HIV (PLWH). However, often PLWH cannot be located based on the contact information available in these data sources. We assessed the accuracy of contact information for PLWH in HIV case surveillance and additional data sources and whether time since diagnosis was associated with accurate contact information in HIV case surveillance and successful contact. The Case Surveillance-Based Sampling (CSBS) project was a pilot HIV surveillance system that selected a random population-based sample of people diagnosed with HIV from HIV case surveillance registries in 5 state and metropolitan areas. From November 2012 through June 2014, CSBS staff members attempted to locate and interview 1800 sampled people and used 22 data sources to search for contact information. Among 1063 contacted PLWH, HIV case surveillance data provided accurate telephone number, address, or HIV care facility information for 239 (22%), 412 (39%), and 827 (78%) sampled people, respectively. CSBS staff members used additional data sources, such as support services and commercial people-search databases, to locate and contact PLWH with insufficient contact information in HIV case surveillance. PLWH diagnosed <1 year ago were more likely to have accurate contact information in HIV case surveillance than were PLWH diagnosed ≥1 year ago (P = .002), and the benefit from using additional data sources was greater for PLWH with more longstanding HIV infection (P < .001). When HIV case surveillance cannot provide accurate contact information, health departments can prioritize searching additional data sources, especially for people with more longstanding HIV infection.
Long-term Doppler Shift and Line Profile Studies of Planetary Search Target Stars
NASA Technical Reports Server (NTRS)
McMillan, Robert S.
2002-01-01
This grant supported attempts to develop a method for measuring the Doppler shifts of solar-type stars more accurately. The expense of future spaceborne telescopes to search for solar systems like our own makes it worth trying to improve the relatively inexpensive pre-flight reconnaissance by ground-based telescopes. The concepts developed under this grant contributed to the groundwork for such improvements. They were focused on how to distinguish between extrasolar planets and stellar activity (convection) cycles. To measure the Doppler shift (radial velocity; RV) of the center of mass of a star in the presence of changing convection in the star's photosphere, one can either measure the effect of convection separately from that of the star's motion and subtract its contribution to the apparent RV, or measure the RV in a way that is insensitive to convection. This grant supported investigations into both of these approaches. We explored the use of a Fabry-Perot etalon interferometer and a multichannel Fourier Transform Spectrometer (mFTS), and finished making a 1.8-m telescope operational and potentially available for this work.
Context matters: the structure of task goals affects accuracy in multiple-target visual search.
Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R
2014-05-01
Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered under constraints that approximated either radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed objective framework produced more multiple-target search errors; thus, adopting a fixed duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Galactic Astronomy in the Ultraviolet
NASA Astrophysics Data System (ADS)
Rastorguev, A. S.; Sachkov, M. E.; Zabolotskikh, M. V.
2017-12-01
We propose a number of prospective observational programs for the ultraviolet space observatory WSO-UV, which seem to be of great importance to modern galactic astronomy. The programs include the search for binary Cepheids; the search and detailed photometric study and the analysis of radial distribution of UV-bright stars in globular clusters ("blue stragglers", blue horizontal-branch stars, RR Lyrae variables, white dwarfs, and stars with UV excesses); the investigation of stellar content and kinematics of young open clusters and associations; the study of spectral energy distribution in hot stars, including calculation of the extinction curves in the UV, optical and NIR; and accurate definition of the relations between the UV-colors and effective temperature. The high angular resolution of the observatory allows accurate astrometric measurements of stellar proper motions and their kinematic analysis.
Path Searching Based Crease Detection for Large Scale Scanned Document Images
NASA Astrophysics Data System (ADS)
Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun
2017-12-01
Since large documents are usually folded for preservation, creases occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. Owing to the imaging process of contactless scanners, the shading on the two sides of a crease usually differs considerably. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, candidate crease paths are obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in large document images.
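The final step, Dijkstra path searching, can be sketched as finding a minimum-cost top-to-bottom path through a per-pixel cost grid. Treating the filtered shading image as such a grid (lower cost where a crease is more likely) is an assumption about the paper's setup, not its exact formulation:

```python
import heapq

def dijkstra_crease_path(cost):
    """Minimum-cost top-to-bottom path through a 2D cost grid.

    cost: list of rows of non-negative per-pixel costs (e.g. an inverted
    crease-likelihood map derived from the shading image).
    Returns the column index chosen in each row.
    """
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    prev = [[None] * cols for _ in range(rows)]
    heap = [(cost[0][c], 0, c) for c in range(cols)]  # may start anywhere on the top row
    for c in range(cols):
        dist[0][c] = cost[0][c]
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry
        if r == rows - 1:
            break  # cheapest bottom-row cell reached
        for dc in (-1, 0, 1):  # move one row down, optionally drifting sideways
            nc = c + dc
            if 0 <= nc < cols:
                nd = d + cost[r + 1][nc]
                if nd < dist[r + 1][nc]:
                    dist[r + 1][nc] = nd
                    prev[r + 1][nc] = c
                    heapq.heappush(heap, (nd, r + 1, nc))
    # backtrack from the cheapest bottom-row cell
    c = min(range(cols), key=lambda j: dist[rows - 1][j])
    path = [c]
    for r in range(rows - 1, 0, -1):
        c = prev[r][c]
        path.append(c)
    return path[::-1]
```

On a grid whose middle column is uniformly cheap, the returned path follows that column, mimicking a near-vertical crease.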
Stang, Antonia Schirmer; Hartling, Lisa; Fera, Cassandra; Johnson, David; Ali, Samina
2014-01-01
Evidence indicates that pain is undertreated in the emergency department (ED). The first step in improving the pain experience for ED patients is to accurately and systematically assess the actual care being provided. Identifying gaps in the assessment and treatment of pain and improving patient outcomes requires relevant, evidence-based performance measures. To systematically review the literature and identify quality indicators specific to the assessment and management of pain in the ED. Four major bibliographical databases were searched from January 1980 to December 2010, and relevant journals and conference proceedings were manually searched. Original research that described the development or collection of data on one or more quality indicators relevant to the assessment or management of pain in the ED was included. The search identified 18,078 citations. Twenty-three articles were included: 15 observational (cohort) studies; three before-after studies; three audits; one quality indicator development study; and one survey. Methodological quality was moderate, with weaknesses in the reporting of study design and methodology. Twenty unique indicators were identified, with the majority (16 of 20) measuring care processes. Overall, 91% (21 of 23) of the studies reported indicators for the assessment or management of presenting pain, as opposed to procedural pain. Three of the studies included children; however, none of the indicators were developed specifically for a pediatric population. Gaps in the existing literature include a lack of measures reflecting procedural pain, patient outcomes and the pediatric population. Future efforts should focus on developing indicators specific to these key areas.
Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo
2018-03-15
RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Our main goal here is to make this robust tool readily accessible to the proteomics community through a graphical user interface (GUI). We have constructed a graphical user interface to facilitate the use of RAId on users' local machines. Written in Java, RAId_GUI not only makes it easy to run RAId but also provides tools for data/spectra visualization, MS-product analysis, molecular isotopic distribution analysis, and graphing retrieval versus the proportion of false discoveries. The results viewer displays the analysis results and allows users to download them. Both the knowledge-integrated organismal databases and the code package (containing source code, the graphical user interface, and a user manual) are available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/raid.html .
Jacobs, Peter L; Ridder, Lars; Ruijken, Marco; Rosing, Hilde; Jager, Nynke Gl; Beijnen, Jos H; Bas, Richard R; van Dongen, William D
2013-09-01
Comprehensive identification of human drug metabolites in first-in-man studies is crucial to avoid delays in later stages of drug development. We developed an efficient workflow for systematic identification of human metabolites in plasma or serum that combines metabolite prediction, high-resolution accurate mass LC-MS and MS vendor independent data processing. Retrospective evaluation of predictions for 14 (14)C-ADME studies published in the period 2007-January 2012 indicates that on average 90% of the major metabolites in human plasma can be identified by searching for accurate masses of predicted metabolites. Furthermore, the workflow can identify unexpected metabolites in the same processing run, by differential analysis of samples of drug-dosed subjects and (placebo-dosed, pre-dose or otherwise blank) control samples. To demonstrate the utility of the workflow we applied it to identify tamoxifen metabolites in serum of a breast cancer patient treated with tamoxifen. Previously published metabolites were confirmed in this study and additional metabolites were identified, two of which are discussed to illustrate the advantages of the workflow.
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
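A minimal sketch of the hybrid idea described above, firefly attraction moves combined with a DE-style mutate-and-select step, is shown below. The DE/rand/1 mutation, the parameter names, and all defaults are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def firefly_de(f, dim, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2,
               F=0.5, lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim. Fireflies move toward brighter
    (lower-cost) neighbours; a DE/rand/1-style trial vector is then
    tested for each firefly and kept only if it improves the cost."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter: attract i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                    pop[i] = [min(hi, max(lo,
                              a + beta * (b - a) + alpha * (rng.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
            # DE-style evolutionary step: recombine three distinct others
            a, b, c = rng.sample([k for k in range(n) if k != i], 3)
            trial = [min(hi, max(lo, pa + F * (pb - pc)))
                     for pa, pb, pc in zip(pop[a], pop[b], pop[c])]
            tc = f(trial)
            if tc < cost[i]:  # greedy selection, as in DE
                pop[i], cost[i] = trial, tc
    best = min(range(n), key=lambda k: cost[k])
    return pop[best], cost[best]
```

Because the brightest firefly is never perturbed and DE trials are accepted only on improvement, the best cost is monotonically non-increasing across iterations.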
NASA Astrophysics Data System (ADS)
Keane, Tommy P.; Saber, Eli; Rhody, Harvey; Savakis, Andreas; Raj, Jeffrey
2012-04-01
Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
Does linear separability really matter? Complex visual search is explained by simple search
Vighneshvel, T.; Arun, S. P.
2013-01-01
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822
Lavine, Barry K; White, Collin G; Allen, Matthew D; Weakley, Andrew
2017-03-01
Multilayered automotive paint fragments, which are one of the most complex materials encountered in the forensic science laboratory, provide crucial links in criminal investigations and prosecutions. To determine the origin of these paint fragments, forensic automotive paint examiners have turned to the paint data query (PDQ) database, which allows the forensic examiner to compare the layer sequence and color, texture, and composition of the sample to paint systems of the original equipment manufacturer (OEM). However, modern automotive paints have a thin color coat and this layer on a microscopic fragment is often too thin to obtain accurate chemical and topcoat color information. A search engine has been developed for the infrared (IR) spectral libraries of the PDQ database in an effort to improve discrimination capability and permit quantification of discrimination power for OEM automotive paint comparisons. The similarity of IR spectra of the corresponding layers of various records for original finishes in the PDQ database often results in poor discrimination using commercial library search algorithms. A pattern recognition approach employing pre-filters and a cross-correlation library search algorithm that performs both a forward and backward search has been used to significantly improve the discrimination of IR spectra in the PDQ database and thus improve the accuracy of the search. This improvement permits inter-comparison of OEM automotive paint layer systems using the IR spectra alone. Such information can serve to quantify the discrimination power of the original automotive paint encountered in casework and further efforts to succinctly communicate trace evidence to the courts.
Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.
Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile
2016-01-01
This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
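The evaluation procedure, growing word combinations until the engine's top hit is the intended code, can be sketched with a toy relevance score standing in for a real full-text engine. The scoring function and the example codes below are assumptions for illustration, not the engines' actual ranking:

```python
from itertools import combinations

def best_match(query_words, texts):
    """Toy relevance score: fraction of a code's description words hit
    by the query. Returns the unique top-scoring code, or None on a tie."""
    scores = {c: len(query_words & t) / len(t) for c, t in texts.items()}
    top = max(scores.values())
    winners = [c for c, s in scores.items() if s == top]
    return winners[0] if len(winners) == 1 else None

def minimal_identifying_words(codes):
    """For each code, find the smallest combination of its own description
    words whose best full-text match is that code itself."""
    texts = {c: set(d.lower().split()) for c, d in codes.items()}
    result = {}
    for code, words in texts.items():
        result[code] = None
        for k in range(1, len(words) + 1):  # grow the query one word at a time
            for query in combinations(sorted(words), k):
                if best_match(set(query), texts) == code:
                    result[code] = query
                    break
            if result[code] is not None:
                break
    return result
```

For a toy code set, a single sufficiently specific word often suffices; ambiguous descriptions force longer queries.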
Generating Personalized Web Search Using Semantic Context
Xu, Zheng; Chen, Hai-Yan; Yu, Jie
2015-01-01
The “one size fits all” criticism of search engines is that when queries are submitted, the same results are returned to different users. In order to solve this problem, personalized search is proposed, since it can provide different search results based upon the preferences of users. However, existing methods concentrate more on the long-term and independent user profile, and thus reduce the effectiveness of personalized search. In this paper, we capture the user context to provide accurate user preferences for effective personalized search. First, the short-term query context is generated to identify related concepts of the query. Second, the user context is generated based on the click-through data of users. Finally, a forgetting factor is introduced to merge the independent user contexts in a user session, which maintains the evolution of user preferences. Experimental results fully confirm that our approach can successfully represent user context according to individual user information needs. PMID:26000335
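The forgetting-factor merge can be sketched as follows. Representing the user context as per-concept weights and calling the decay parameter `forget` are assumptions for illustration, not the paper's notation:

```python
def merge_context(history, current, forget=0.8):
    """Merge per-concept preference weights from successive queries in a
    session. Older preferences decay by `forget` at each step, so recent
    clicks dominate while long-standing interests fade gradually."""
    merged = {c: w * forget for c, w in history.items()}
    for c, w in current.items():
        merged[c] = merged.get(c, 0.0) + w
    return merged
```

For example, after a session moves from animal-related to car-related clicks for the query "jaguar", the car concept outweighs the older animal concept without erasing it.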
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Mitchell; Thompson, Aidan P.
The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential to produce accurate defect formation energies for the W-He binary system.
Book Review: Astronomy: A Self-Teaching Guide, 6th Edition
NASA Astrophysics Data System (ADS)
Marigza, R. N., Jr.
2009-03-01
The sixth edition of Moche's book is up-to-date with the latest in astronomy. It contains accurate astronomical data on stars and constellations. The topics are accompanied by web site addresses where the reader can expand his/her knowledge and see high-resolution images of the celestial targets. This edition incorporates new discoveries and suggestions made since earlier editions. Among the new developments are twenty-first-century research into black holes, active galaxies and quasars, searches for life in space, the origin and structure of our universe, and the latest in ground and space telescopes.
A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG
Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng
2017-01-01
In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One of the important hurdles in applying DWT is choosing its settings, which previous works have done empirically or arbitrarily. This study aimed to develop a framework for automatically searching the optimal DWT settings to improve accuracy and to reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data with 7 commonly used wavelet families, down to the maximum theoretical level of each mother wavelet. Wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, which showed optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
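The decomposition step can be illustrated with a hand-rolled Haar transform taken to its maximum level. A real implementation of the search described above would iterate over wavelet families and levels with a library such as PyWavelets and score each setting by detection accuracy; this sketch shows only the per-level decomposition for the simplest wavelet:

```python
import math

def haar_dwt_levels(signal):
    """Decompose a signal with the Haar wavelet to its maximum depth.

    Returns (final_approximation, [detail coefficients per level]).
    Odd-length remainders are truncated by the pairwise zip, which a
    production implementation would handle via signal extension.
    """
    approx = list(signal)
    details = []
    s = math.sqrt(2.0)
    while len(approx) >= 2:
        a = [(x + y) / s for x, y in zip(approx[0::2], approx[1::2])]  # low-pass
        d = [(x - y) / s for x, y in zip(approx[0::2], approx[1::2])]  # high-pass
        details.append(d)
        approx = a
    return approx, details
```

A constant signal yields zero detail coefficients at every level, so any seizure-related transients show up entirely in the detail bands.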
Adams, Jean; Hillier-Brown, Frances C; Moore, Helen J; Lake, Amelia A; Araujo-Soares, Vera; White, Martin; Summerbell, Carolyn
2016-09-29
Grey literature includes a range of documents not controlled by commercial publishing organisations. This means that grey literature can be difficult to search and retrieve for evidence synthesis. Much knowledge and evidence in public health, and other fields, accumulates from innovation in practice. This knowledge may not even be of sufficient formality to meet the definition of grey literature. We term this knowledge 'grey information'. Grey information may be even harder to search for and retrieve than grey literature. On three previous occasions, we have attempted to systematically search for and synthesise public health grey literature and information-both to summarise the extent and nature of particular classes of interventions and to synthesise results of evaluations. Here, we briefly describe these three 'case studies' but focus on our post hoc critical reflections on searching for and synthesising grey literature and information garnered from our experiences of these case studies. We believe these reflections will be useful to future researchers working in this area. Issues discussed include search methods, searching efficiency, replicability of searches, data management, data extraction, assessing study 'quality', data synthesis, time and resources, and differentiating evidence synthesis from primary research. Information on applied public health research questions relating to the nature and range of public health interventions, as well as many evaluations of these interventions, may be predominantly, or only, held in grey literature and grey information. Evidence syntheses on these topics need, therefore, to embrace grey literature and information. Many typical systematic review methods for searching, appraising, managing, and synthesising the evidence base can be adapted for use with grey literature and information. Evidence synthesisers should carefully consider the opportunities and problems offered by including grey literature and information. 
Enhanced incentives for accurate recording and further methodological developments in retrieval will facilitate future syntheses of grey literature and information.
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (the Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
CAE "FOCUS" for modelling and simulating electron optics systems: development and application
NASA Astrophysics Data System (ADS)
Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor
2017-02-01
Electron optics is the theoretical basis of scientific instrument engineering, and mathematical simulation of the underlying processes is the foundation of contemporary design of complex electron-optical devices. Such numerical simulation problems are effectively solved by CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method with accuracy control for computing charged-particle trajectories, and original methods for finding the conditions of angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent subtask. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. 
These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
Attentional Control via Parallel Target-Templates in Dual-Target Search
Barrett, Doug J. K.; Zobay, Oliver
2014-01-01
Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793
DeSmitt, Holly J; Domire, Zachary J
2016-12-01
Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
NASA Astrophysics Data System (ADS)
Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro
2010-02-01
In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram is utilized as the feature vector of each VOP (video object plane); this feature had previously been applied reliably to human face recognition. Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video stream. When combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as drama, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than conventional fast video search algorithms.
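A minimal sketch of the APIDQ idea, under the assumption that the feature is a normalized histogram of quantized adjacent-pixel intensity differences (the paper applies it to DC sequences of VOPs; here it is shown on plain grayscale arrays):

```python
import numpy as np

def apidq_histogram(frame, n_bins=32):
    # Differences between horizontally and vertically adjacent pixels
    f = frame.astype(np.int16)                  # avoid uint8 wrap-around
    d = np.concatenate([np.diff(f, axis=1).ravel(),
                        np.diff(f, axis=0).ravel()])
    # Quantize differences in [-255, 255] into n_bins levels and histogram them
    hist, _ = np.histogram(d, bins=n_bins, range=(-255, 255))
    return hist / hist.sum()                    # normalize away the frame size

def l1_distance(h1, h2):
    # Smaller distance = more similar difference statistics
    return float(np.abs(h1 - h2).sum())
```

Similar frames yield nearly identical histograms, so nearest-neighbour search over these compact vectors (plus temporal pruning such as active search) can match clips far faster than comparing decoded pixels.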
SPARK: Adapting Keyword Query to Semantic Search
NASA Astrophysics Data System (ADS)
Zhou, Qi; Wang, Chong; Xiong, Miao; Wang, Haofen; Yu, Yong
Semantic search promises to provide more accurate results than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named 'SPARK' has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved an encouraging translation result.
Raj, S; Sharma, V L; Singh, A J; Goel, S
2016-01-01
Background. The health information available on websites should be reliable and accurate so that the community can make informed decisions. This study assessed the quality and readability of health information websites on the World Wide Web in India. Methods. This cross-sectional study was carried out in June 2014. The keywords "Health" and "Information" were used on the search engines Google and Yahoo. Of 50 websites (25 from each search engine), 32 websites were evaluated after exclusions. The LIDA tool was used to assess quality, whereas readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and SMOG. Results. Forty percent of websites (n = 13) were sponsored by the government. Health On the Net Code of Conduct (HONcode) certification was present on 50% (n = 16) of websites. The mean LIDA score (74.31) was average. Only 3 websites scored high on the LIDA score, and only five had readability scores at the recommended sixth-grade level. Conclusion. Most health information websites were of average quality, especially in terms of usability and reliability, and were written at high readability levels. Efforts are needed to develop health information websites that can help the general population make informed decisions.
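The readability scores used above are simple closed-form formulas; for example, the Flesch Reading Ease Score is 206.835 - 1.015 x (words per sentence) - 84.6 x (syllables per word), and FKGL is 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. A rough sketch with a naive vowel-group syllable counter (real readability tools syllabify more carefully):

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per vowel group (real tools syllabify properly)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _stats(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return len(words) / sentences, syllables / len(words)

def flesch_reading_ease(text):
    asl, asw = _stats(text)        # avg sentence length, avg syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

def flesch_kincaid_grade(text):
    asl, asw = _stats(text)
    return 0.39 * asl + 11.8 * asw - 15.59
```

Higher FRES means easier text; sixth-grade material (the recommended level mentioned above) corresponds roughly to FRES 80-90.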
NASA Astrophysics Data System (ADS)
Mackay, D. Scott; Band, Lawrence E.
1998-04-01
This paper presents a new method for extracting flow directions, contributing (upslope) areas, and nested catchments from digital elevation models in lake-dominated areas. Existing tools for acquiring descriptive variables of the topography, such as surface flow directions and contributing areas, were developed for moderate to steep topography. These tools are typically difficult to apply in gentle topography owing to limitations in explicitly handling lakes and other flat areas. This paper addresses the problem of accurately representing general topographic features by first identifying distinguishing features, such as lakes, in gentle topography areas and then using these features to guide the search for topographic flow directions and catchment marking. Lakes are explicitly represented in the topology of a watershed for use in water routing. Nonlake flat features help guide the search for topographic flow directions in areas of low signal to noise. This combined feature-based and grid-based search for topographic features yields improved contributing areas and watershed boundaries where there are lakes and other flat areas. Lakes are easily classified from remotely sensed imagery, which makes automated representation of lakes as subsystems within a watershed system tractable with widely available data sets.
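For context, the grid-based baseline that struggles on flats can be sketched as a standard D8 steepest-descent pass; cells where no neighbour is lower (lakes and other flat areas) receive no direction, which is exactly where the feature-guided search described above takes over. This is a generic illustration, not the authors' implementation:

```python
import numpy as np

def d8_flow_directions(dem):
    # Steepest-descent (D8): each cell drains to its lowest of 8 neighbours.
    # Returns an (di, dj) offset per cell; (0, 0) marks flats and pits where
    # D8 stalls - the cells that need feature-based (lake/flat) handling.
    rows, cols = dem.shape
    out = np.zeros((rows, cols, 2), dtype=int)
    for i in range(rows):
        for j in range(cols):
            best, drop_max = (0, 0), 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < rows and 0 <= nj < cols:
                        dist = (di * di + dj * dj) ** 0.5
                        drop = (dem[i, j] - dem[ni, nj]) / dist
                        if drop > drop_max:
                            drop_max, best = drop, (di, dj)
            out[i, j] = best
    return out
```

On a uniformly tilted surface every interior cell drains downslope, while on a perfectly flat patch (a lake surface in a DEM) D8 assigns no direction at all, motivating the combined feature-based and grid-based search.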
Reduction of astrometric plates
NASA Technical Reports Server (NTRS)
Stock, J.
1984-01-01
A rapid and accurate method for the reduction of comet or asteroid plates is described. Projection equations, scale length correction, rotation of coordinates, linearization, the search for additional reference stars, and the final solution are examined.
Reading and visual search: a developmental study in normal children.
Seassau, Magali; Bucci, Maria-Pia
2013-01-01
Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence.
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
Davis, David A; Mazmanian, Paul E; Fordis, Michael; Van Harrison, R; Thorpe, Kevin E; Perrier, Laure
2006-09-06
Core physician activities of lifelong learning, continuing medical education credit, relicensure, specialty recertification, and clinical competence are linked to the abilities of physicians to assess their own learning needs and choose educational activities that meet these needs. To determine how accurately physicians self-assess compared with external observations of their competence. The electronic databases MEDLINE (1966-July 2006), EMBASE (1980-July 2006), CINAHL (1982-July 2006), PsycINFO (1967-July 2006), the Research and Development Resource Base in CME (1978-July 2006), and proprietary search engines were searched using terms related to self-directed learning, self-assessment, and self-reflection. Studies were included if they compared physicians' self-rated assessments with external observations, used quantifiable and replicable measures, included a study population of at least 50% practicing physicians, residents, or similar health professionals, and were conducted in the United Kingdom, Canada, United States, Australia, or New Zealand. Studies were excluded if they were comparisons of self-reports, studies of medical students, assessed physician beliefs about patient status, described the development of self-assessment measures, or were self-assessment programs of specialty societies. Studies conducted in the context of an educational or quality improvement intervention were included only if comparative data were obtained before the intervention. Study population, content area and self-assessment domain of the study, methods used to measure the self-assessment of study participants and those used to measure their competence or performance, existence and use of statistical tests, study outcomes, and explanatory comparative data were extracted. The search yielded 725 articles, of which 17 met all inclusion criteria. The studies included a wide range of domains, comparisons, measures, and methodological rigor. 
Of the 20 comparisons between self- and external assessment, 13 demonstrated little, no, or an inverse relationship and 7 demonstrated positive associations. A number of studies found the worst accuracy in self-assessment among physicians who were the least skilled and those who were the most confident. These results are consistent with those found in other professions. While suboptimal in quality, the preponderance of evidence suggests that physicians have a limited ability to accurately self-assess. The processes currently used to undertake professional development and evaluate competence may need to focus more on external assessment.
Nayor, Jennifer; Borges, Lawrence F; Goryachev, Sergey; Gainer, Vivian S; Saltzman, John R
2018-07-01
Adenoma detection rate (ADR) is a widely used colonoscopy quality indicator, but its calculation is labor-intensive and cumbersome using current electronic medical databases. Natural language processing (NLP) is a method used to extract meaning from unstructured or free-text data. Our aim was to develop and validate an accurate automated process for calculating ADR and serrated polyp detection rate (SDR) on data stored in widely used electronic health record systems, specifically the Epic electronic health record system, the Provation® endoscopy reporting system, and the Sunquest PowerPath pathology reporting system. Screening colonoscopies performed between June 2010 and August 2015 were identified using the Provation® reporting tool. An NLP pipeline was developed to identify adenomas and sessile serrated polyps (SSPs) on the pathology reports corresponding to these colonoscopy reports. The pipeline was validated against a manual search, and its precision, recall, and effectiveness were calculated. ADR and SDR were then calculated. We identified 8032 screening colonoscopies that were linked to 3821 pathology reports (47.6%). The NLP pipeline had an accuracy of 100% for adenomas and 100% for SSPs. Mean total ADR was 29.3% (range 14.7-53.3%); mean male ADR was 35.7% (range 19.7-62.9%); and mean female ADR was 24.9% (range 9.1-51.0%). Mean total SDR was 4.0% (range 0-9.6%). We developed and validated an NLP pipeline that accurately and automatically calculates ADRs and SDRs using data stored in Epic, Provation® and Sunquest PowerPath. This NLP pipeline can be used to evaluate colonoscopy quality parameters at both the individual and practice levels.
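The essence of such a pipeline, computing detection rates from free-text pathology reports, can be sketched with simple keyword matching. The pattern lists and report strings below are illustrative stand-ins, not the validated NLP pipeline described in the abstract:

```python
import re

# Illustrative patterns; a validated pipeline would use a richer terminology set
ADENOMA_PAT = re.compile(r"\b(tubular|tubulovillous|villous) adenoma\b", re.I)
SSP_PAT = re.compile(r"\bsessile serrated (polyp|adenoma|lesion)\b", re.I)

def classify_report(text):
    # Map one pathology report to the finding categories it documents
    return {"adenoma": bool(ADENOMA_PAT.search(text)),
            "ssp": bool(SSP_PAT.search(text))}

def detection_rate(reports, key):
    # Fraction of screening exams with at least one finding of the given type
    return sum(classify_report(r)[key] for r in reports) / len(reports)
```

In practice the denominator is all screening colonoscopies, including those with no linked pathology report, which is why linking endoscopy and pathology systems is the hard part of automating ADR.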
Magnetic Field Generation and B-Dot Sensor Characterization in the High Frequency Band
2012-03-01
AFIT/GE/ENG/12-20. Abstract: Designing a high frequency (HF) … large wavelengths in the HF range make it difficult to accurately estimate from which direction a magnetic field is emitting. Accurate DF estimates are … necessary for search and rescue operations and geolocating RF emitters of interest. The primary goal of this research is to characterize the
NASA Astrophysics Data System (ADS)
Prabhat, Prashant; Peet, Michael; Erdogan, Turan
2016-03-01
In order to design a fluorescence experiment, typically the spectra of a fluorophore and of a filter set are overlaid on a single graph and the spectral overlap is evaluated intuitively. However, in a typical fluorescence imaging system the fluorophores and optical filters are not the only wavelength dependent variables - even the excitation light sources have been changing. For example, LED Light Engines may have a significantly different spectral response compared to the traditional metal-halide lamps. Therefore, for a more accurate assessment of fluorophore-to-filter-set compatibility, all sources of spectral variation should be taken into account simultaneously. Additionally, intuitive or qualitative evaluation of many spectra does not necessarily provide a realistic assessment of the system performance. "SearchLight" is a freely available web-based spectral plotting and analysis tool that can be used to address the need for accurate, quantitative spectral evaluation of fluorescence measurement systems. This tool is available at: http://searchlight.semrock.com/. Based on a detailed mathematical framework [1], SearchLight calculates signal, noise, and signal-to-noise ratio for multiple combinations of fluorophores, filter sets, light sources and detectors. SearchLight allows for qualitative and quantitative evaluation of the compatibility of filter sets with fluorophores, analysis of bleed-through, identification of optimized spectral edge locations for a set of filters under specific experimental conditions, and guidance regarding labeling protocols in multiplexing imaging assays. Entire SearchLight sessions can be shared with colleagues and collaborators and saved for future reference. [1] Anderson, N., Prabhat, P. and Erdogan, T., Spectral Modeling in Fluorescence Microscopy, http://www.semrock.com (2010).
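The kind of computation SearchLight performs can be illustrated by integrating the product of all wavelength-dependent terms: source, excitation filter, fluorophore emission, emission filter, and detector quantum efficiency. The spectra below are idealized Gaussians and boxcar filters chosen purely for illustration, not measured data:

```python
import numpy as np

wl = np.arange(400, 701, dtype=float)   # wavelength grid, 1 nm spacing

def gaussian(center, width):
    return np.exp(-(((wl - center) / width) ** 2))

def bandpass(lo, hi):
    return ((wl >= lo) & (wl <= hi)).astype(float)

# Idealized spectra (not measured data): LED-like source, GFP-like fluorophore
source     = gaussian(470, 15)          # excitation light source spectrum
ex_filter  = bandpass(460, 490)
emission   = gaussian(510, 20)          # fluorophore emission spectrum
em_filter  = bandpass(500, 550)
qe         = np.full(wl.shape, 0.7)     # flat detector quantum efficiency

# Detected signal: product of every wavelength-dependent term, summed over wl
excitation = (source * ex_filter).sum()
signal = excitation * (emission * em_filter * qe).sum()

# Bleed-through: the same fluorophore seen through a second channel's filter
em_filter2 = bandpass(570, 620)
bleed = excitation * (emission * em_filter2 * qe).sum()
print(f"bleed-through fraction: {bleed / signal:.2e}")
```

Swapping in a different source spectrum (an LED Light Engine versus a metal-halide lamp, say) changes `excitation` and hence the predicted signal, which is why all spectral terms must be evaluated together rather than eyeballed on an overlay plot.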
Auto-biometric for M-mode echocardiography
NASA Astrophysics Data System (ADS)
Zhang, Wei; Park, Jinhyong; Zhou, S. Kevin
2010-03-01
In this paper we present a system for fast and accurate detection of anatomical structures (calipers) in M-mode images. The task is challenging because of dramatic variations in their appearances. We propose to solve the problem in a progressive manner, which ensures both robustness and efficiency. The system first obtains a rough caliper localization using the intensity profile image, and then runs a constrained search for accurate caliper positions. Markov Random Field (MRF) and warping image detectors are used to jointly consider appearance information and the geometric relationship between calipers. Extensive experiments show that our system achieves more accurate results and uses less time in comparison with previously reported work.
Alderdice, Fiona; Gargan, Phyl; McCall, Emma; Franck, Linda
2018-01-30
Online resources are a source of information for parents of premature babies when their baby is discharged from hospital. To explore what topics parents deemed important after returning home from hospital with their premature baby and to evaluate the quality of existing websites that provide information for parents post-discharge. In stage 1, 23 parents living in Northern Ireland participated in three focus groups and shared their information and support needs following the discharge of their infant(s). In stage 2, a World Wide Web (WWW) search was conducted using Google, Yahoo and Bing search engines. Websites meeting pre-specified inclusion criteria were reviewed using two website assessment tools and by calculating a readability score. Website content was compared to the topics identified by parents in the focus groups. Five overarching topics were identified across the three focus groups: life at home after neonatal care, taking care of our family, taking care of our premature baby, baby's growth and development and help with getting support and advice. Twenty-nine sites were identified that met the systematic web search inclusion criteria. Fifteen (52%) covered all five topics identified by parents to some extent and 9 (31%) provided current, accurate and relevant information based on the assessment criteria. Parents reported the need for information and support post-discharge from hospital. This was not always available to them, and relevant online resources were of varying quality. Listening to parents needs and preferences can facilitate the development of high-quality, evidence-based, parent-centred resources. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.
Directing the public to evidence-based online content
Cooper, Crystale Purvis; Gelb, Cynthia A; Vaughn, Alexandra N; Smuland, Jenny; Hughes, Alexandra G; Hawkins, Nikki A
2015-01-01
To direct online users searching for gynecologic cancer information to accurate content, the Centers for Disease Control and Prevention’s (CDC) ‘Inside Knowledge: Get the Facts About Gynecologic Cancer’ campaign sponsored search engine advertisements in English and Spanish. From June 2012 to August 2013, advertisements appeared when US Google users entered search terms related to gynecologic cancer. Users who clicked on the advertisements were directed to relevant content on the CDC website. Compared with the 3 months before the initiative (March–May 2012), visits to the CDC web pages linked to the advertisements were 26 times higher after the initiative began (June–August 2012) (p<0.01), and 65 times higher when the search engine advertisements were supplemented with promotion on television and additional websites (September 2012–August 2013) (p<0.01). Search engine advertisements can direct users to evidence-based content at a highly teachable moment—when they are seeking relevant information. PMID:25053580
Optimal directed searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning
2016-03-01
Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
Competitive code-based fast palmprint identification using a set of cover trees
NASA Astrophysics Data System (ADS)
Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan
2009-06-01
A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor from among all the templates in a database. When applied on a large-scale identification system, it is often necessary to speed up the nearest-neighbor searching process. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate the fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute force searching.
Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces
Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.
2015-01-01
The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122
tRNAscan-SE On-line: integrating search and context for analysis of transfer RNA genes.
Lowe, Todd M; Chan, Patricia P
2016-07-08
High-throughput genome sequencing continues to grow the need for rapid, accurate genome annotation and tRNA genes constitute the largest family of essential, ever-present non-coding RNA genes. Newly developed tRNAscan-SE 2.0 has advanced the state-of-the-art methodology in tRNA gene detection and functional prediction, captured by rich new content of the companion Genomic tRNA Database. Previously, web-server tRNA detection was isolated from knowledge of existing tRNAs and their annotation. In this update of the tRNAscan-SE On-line resource, we tie together improvements in tRNA classification with greatly enhanced biological context via dynamically generated links between web server search results, the most relevant genes in the GtRNAdb and interactive, rich genome context provided by UCSC genome browsers. The tRNAscan-SE On-line web server can be accessed at http://trna.ucsc.edu/tRNAscan-SE/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Semi-automating the manual literature search for systematic reviews increases efficiency.
Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald
2010-03-01
To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard against which to judge the proposed Scopus method. Outcome measures included completeness of article detection (validity) and personnel time involved (efficiency). Using both methods independently, we compared the results on the accuracy of article detection and on the time spent conducting the search. Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.
Towards an SEMG-based tele-operated robot for masticatory rehabilitation.
Kalani, Hadi; Moghimi, Sahar; Akbarzadeh, Alireza
2016-08-01
This paper proposes real-time trajectory generation for a masticatory rehabilitation robot based on surface electromyography (SEMG) signals. We used two Gough-Stewart robots: the first as a rehabilitation robot, while the second was developed to model the human jaw system. The legs of the rehabilitation robot were controlled by the SEMG signals of a tele-operator to reproduce the masticatory motion in the human jaw, supposedly mounted on the moving platform, through predicting the location of a reference point. Actual jaw motions and the SEMG signals from the masticatory muscles were recorded and used as output and input, respectively. Three different methods, namely time-delayed neural networks, time-delayed fast orthogonal search, and the time-delayed Laguerre expansion technique, were employed and compared to predict the kinematic parameters. The optimal model structures as well as the input delays were obtained for each model and each subject through a genetic algorithm. Equations of motion were obtained by the virtual work method. A fuzzy method was employed to develop a fuzzy impedance controller. Moreover, a jaw model was developed to demonstrate the time-varying behavior of the muscle lengths during the rehabilitation process. All three modeling methods were capable of providing reasonably accurate estimations of the kinematic parameters, although the accuracy and training/validation speed of time-delayed fast orthogonal search were higher than those of the other two methods. Also, during a simulation study, the fuzzy impedance scheme proved successful in controlling the moving platform for the accurate navigation of the reference point along the desired trajectory. SEMG has been widely used as a control command for prostheses and exoskeleton robots; in the current study, however, the proposed rehabilitation robot reproduced the complete continuous profile of the clenching motion in the sagittal plane.
Copyright © 2016. Published by Elsevier Ltd.
Modeling and prediction of human word search behavior in interactive machine translation
NASA Astrophysics Data System (ADS)
Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na
2017-12-01
As a computer-aided translation method, interactive machine translation technology reduces repetitive, mechanical operations in manual translation through a variety of methods, improving translation efficiency, and plays an important role in practical translation work. In this paper, we take the behavior of users frequently searching for words during the translation process as the research object, and recast this behavior as a translation selection problem for the current sentence. The paper presents a prediction model that jointly exploits an alignment model, a translation model, and a language model of word-searching behavior. It achieves highly accurate prediction of word-searching behavior and reduces the switching between mouse and keyboard operations during the user's translation process.
Dehaene, S
1989-07-01
Treisman and Gelade's (1980) feature-integration theory of attention states that a scene must be serially scanned before the objects in it can be accurately perceived. Is serial scanning compatible with the speed observed in the perception of real-world scenes? Most real scenes consist of many more dimensions (color, size, shape, depth, etc.) than those generally found in search paradigms. Furthermore, real objects differ from each other along many of these dimensions. The present experiment assessed the influence of the total number of dimensions and target/distractor discriminability (the number of dimensions that suffice to separate a target from distractors) on search times for a conjunction of features. Search was always found to be serial. However, for the most discriminable targets, search rate was so fast that search times were in the same range as pop-out detection times. Apparently, greater discriminability enables subjects to direct attention at a faster rate and at only a fraction of the items in a scene.
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
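The core of such a signal detection theory model, in the single-feature case, is a maximum-of-noisy-responses decision rule, which can be simulated directly. The d-prime value, set sizes, and trial counts below are arbitrary illustrations, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def search_accuracy(d_prime, set_size, trials=20000):
    # Every display element elicits a noisy internal response; the target's
    # mean is shifted by d_prime. The observer reports the maximum (max rule).
    target = rng.normal(d_prime, 1.0, trials)
    distractors = rng.normal(0.0, 1.0, (trials, set_size - 1))
    return float(np.mean(target > distractors.max(axis=1)))

# Accuracy falls with set size even with no serial, limited-capacity stage
for n in (2, 8, 32):
    print(f"set size {n:2d}: accuracy {search_accuracy(2.0, n):.3f}")
```

For multidimensional displays the model combines the noisy evidence across feature dimensions into one decision variable before taking the maximum; the single-feature case above already shows how a set-size effect arises purely from noise, in line with the abstract's unlimited-capacity account.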
Spatial partitions systematize visual search and enhance target memory.
Solman, Grayden J F; Kingstone, Alan
2017-02-01
Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment-like buildings, rooms, and furniture-can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.
eQuilibrator--the biochemical thermodynamics calculator.
Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron
2012-01-01
The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like 'how much Gibbs energy is released by ATP hydrolysis at pH 5?' are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use.
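The concentration adjustment that eQuilibrator performs can be sketched from the standard relation ΔG = ΔG'° + RT·ln Q. The function below is a minimal illustration with hypothetical inputs, not eQuilibrator's actual code; real transformed energies also depend on pH and ionic strength, which eQuilibrator handles internally.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def reaction_gibbs(dg0_prime, concentrations, stoich, temp=298.15):
    """Adjust a standard transformed Gibbs energy (kJ/mol) for actual
    metabolite concentrations (M): dG = dG'0 + R*T*ln(Q), where the
    reaction quotient Q is built from stoichiometric coefficients
    (negative for substrates, positive for products)."""
    ln_q = sum(s * math.log(c) for c, s in zip(concentrations, stoich))
    return dg0_prime + R * temp * ln_q
```

For example, with an illustrative ΔG'° of -30 kJ/mol and millimolar rather than standard 1 M concentrations, the reaction becomes substantially more exergonic, which is why fixing concentrations (as the eQuilibrator interface allows) matters.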
NASA Astrophysics Data System (ADS)
Kluber, Alexander; Hayre, Robert; Cox, Daniel
2012-02-01
Motivated by the need to find beta-structure aggregation nuclei for the polyQ diseases such as Huntington's, we have undertaken a search for length-dependent structure in model polyglutamine proteins. We use the Onufriev-Bashford-Case (OBC) generalized Born implicit solvent and GPU-based AMBER11 molecular dynamics with the parm96 force field, coupled with a replica exchange method, to characterize monomeric strands of polyglutamine as a function of chain length and temperature. This force field and solvation method has been shown, among other methods, to accurately reproduce folded metastability in certain small peptides, and to yield accurate de novo folded structures in a millisecond time-scale protein. Using GPU molecular dynamics we can sample out into the microsecond range. Additionally, explicit solvent runs will be used to verify results from the implicit solvent runs. We will assess order using measures of secondary structure and hydrogen bond content.
Anterior mitral valve aneurysm: a rare sequelae of aortic valve endocarditis.
Janardhanan, Rajesh; Kamal, Muhammad Umar; Riaz, Irbaz Bin; Smith, M Cristy
2016-03-01
In intravenous drug abusers, infective endocarditis usually involves right-sided valves, with Staphylococcus aureus being the most common etiologic agent. We present a patient who is an intravenous drug abuser with left-sided (aortic valve) endocarditis caused by Enterococcus faecalis who subsequently developed an anterior mitral valve aneurysm, which is an exceedingly rare complication. A systematic literature search was conducted which identified only five reported cases in the literature of mitral valve aneurysmal rupture in the setting of E. faecalis endocarditis. Real-time 3D-transesophageal echocardiography was critical in making an accurate diagnosis leading to timely intervention. Early recognition of a mitral valve aneurysm (MVA) is important because it may rupture and produce catastrophic mitral regurgitation (MR) in an already seriously ill patient requiring emergency surgery, or it may be overlooked at the time of aortic valve replacement (AVR). Real-time 3D-transesophageal echocardiography (RT-3DTEE) is much more advanced and accurate than transthoracic echocardiography for the diagnosis and management of MVA. © 2016 The authors.
High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing
Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting
2015-01-01
Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells.
NASA Technical Reports Server (NTRS)
Paxton, Laurel
2012-01-01
One of the next steps in the exoplanet search is the development of occulter technology. Starlight suppression for a telescope would provide the ability to more accurately find and characterize potential true-Earth analogs. Coronagraphs have been the subject of much research in recent years but have yet to prove themselves a feasible approach. Attention has now turned to external occulters, or starshades. A large occulting mask in front of a telescope should provide optical resolution comparable to a coronagraph's. Under a TDEM grant, a proposed starshade design was demonstrated to exceed coronagraph resolution by at least an order of magnitude. The current project is to demonstrate that the current design can be manufactured and then properly deployed. Four sample starshade petals were constructed, ready to be attached to a pre-existing deployment truss. Time was spent detailing and modifying the petal construction process so that future petals could be constructed more accurately and quickly.
An efficient algorithm for the retarded time equation for noise from rotating sources
NASA Astrophysics Data System (ADS)
Loiodice, S.; Drikakis, D.; Kokkalis, A.
2018-01-01
This study concerns modelling of noise emanating from rotating sources such as helicopter rotors. We present an accurate and efficient algorithm for the solution of the retarded time equation, which can be used both in subsonic and supersonic flow regimes. A novel approach for the search of the roots of the retarded time function was developed based on considerations of the kinematics of rotating sources and of the bifurcation analysis of the retarded time function. It is shown that the proposed algorithm is faster than the classical Newton and Brent methods, especially in the presence of sources rotating supersonically.
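A plain Newton iteration on the retarded time equation g(τ) = t − τ − R(τ)/c = 0 can be sketched as follows; the rotor geometry, speed of sound, and function names are illustrative assumptions, and the paper's contribution is precisely a faster, bifurcation-informed replacement for this kind of classical solver.

```python
import math

C = 340.0  # speed of sound, m/s (illustrative value)

def retarded_time(t_obs, obs, source_pos, tol=1e-10, max_iter=50):
    """Solve g(tau) = t_obs - tau - R(tau)/C = 0 by Newton iteration with
    a central-difference derivative. source_pos(tau) returns the (x, y, z)
    position of the moving source at emission time tau; obs is the fixed
    observer position. For subsonic sources g is monotonic, so Newton
    converges from the initial guess tau = t_obs."""
    def g(tau):
        return t_obs - tau - math.dist(obs, source_pos(tau)) / C
    tau = t_obs
    for _ in range(max_iter):
        h = 1e-7
        dg = (g(tau + h) - g(tau - h)) / (2 * h)
        step = g(tau) / dg
        tau -= step
        if abs(step) < tol:
            break
    return tau

# Illustrative source on a rotor tip: radius 1 m, 20 Hz rotation (subsonic).
def rotor_tip(tau):
    omega = 2 * math.pi * 20.0
    return (math.cos(omega * tau), math.sin(omega * tau), 0.0)
```

For supersonically rotating sources g(τ) can have multiple roots, which is where a simple Newton scheme breaks down and the root-bracketing analysis described in the abstract becomes necessary.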
Das, Arpita; Bhattacharya, Mahua
2011-01-01
In the present work, the authors have developed a treatment planning system implementing genetic-based neuro-fuzzy approaches for accurate analysis of the shape and margin of tumor masses appearing in breast digital mammograms. It is obvious that a complicated structure invites the problems of overlearning and misclassification. In the proposed methodology, a genetic algorithm (GA) has been used to search for effective input feature vectors, combined with an adaptive neuro-fuzzy model for final classification of the different boundaries of tumor masses. The study involves 200 digitized mammograms from the MIAS and other databases and showed an 86% correct classification rate.
The vendor/laboratory manager relationship: some practical negotiation tips.
Bickford, G R
1993-01-01
We negotiate practically every minute of the day with ourselves, as well as with spouses or loved ones, family members, friends, bosses, and coworkers. Skilled negotiators search for the common good, present accurate information, create alternatives, and strive for agreements that are fair to all concerned. Those who use misinformation and manipulation to win their short-term positions fail to build long-term relationships. Developing a positive attitude toward negotiating involves experience, recognizing the negotiating mechanism, evaluating decisions, and correctly determining when to stop negotiating and move on. Negotiations between suppliers and laboratory managers are used in this article to illustrate these processes.
Features of πΔ photoproduction at high energies
Nys, Jannes; Mathieu, V.; Fernandez-Ramirez, C.; ...
2018-02-02
Hybrid/exotic meson spectroscopy searches at Jefferson Lab require the accurate theoretical description of the production mechanism in peripheral photoproduction. We develop a model for πΔ photoproduction at high energies (5 ≤ E lab ≤ 16 GeV) that incorporates both the absorbed pion and natural-parity cut contributions. We fit the available observables, providing a good description of the energy and angular dependencies of the experimental data. In conclusion, we also provide predictions for the photon beam asymmetry of charged pions at E lab = 9 GeV which is expected to be measured by GlueX and CLAS12 experiments in the near future.
Allones, J L; Martinez, D; Taboada, M
2014-10-01
Clinical terminologies are considered a key technology for capturing clinical data in a precise and standardized manner, which is critical to accurately exchange information among different applications, medical records and decision support systems. An important step to promote the real use of clinical terminologies, such as SNOMED-CT, is to facilitate the process of finding mappings between local terms of medical records and concepts of terminologies. In this paper, we propose a mapping tool to discover text-to-concept mappings in SNOMED-CT. Name-based techniques were combined with a query expansion system to generate alternative search terms, and with a strategy to analyze and take advantage of the semantic relationships of the SNOMED-CT concepts. The developed tool was evaluated and compared to the search services provided by two SNOMED-CT browsers. Our tool automatically mapped clinical terms from a Spanish glossary of procedures in pathology with 88.0% precision and 51.4% recall, providing a substantial improvement of recall (28% and 60%) over other publicly accessible mapping services. The improvements reached by the mapping tool are encouraging. Our results demonstrate the feasibility of accurately mapping clinical glossaries to SNOMED-CT concepts by means of a combination of structural, query expansion and name-based techniques. We have shown that SNOMED-CT is a great source of knowledge to infer synonyms for the medical domain. Results show that an automated query expansion system partially overcomes the challenge of vocabulary mismatch.
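The reported figures follow the usual precision/recall definitions. The sketch below reproduces them from hypothetical counts (88 true positives, 12 false positives and 83 false negatives are assumed purely for illustration, chosen to yield the quoted 88.0% and 51.4%).

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP): fraction of produced mappings that are
    correct. Recall = TP / (TP + FN): fraction of true mappings found."""
    return tp / (tp + fp), tp / (tp + fn)
```

With these illustrative counts, `precision_recall(88, 12, 83)` gives precision 0.880 and recall about 0.515, matching the order of the abstract's figures.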
Identification of "Known Unknowns" Utilizing Accurate Mass Data and ChemSpider
NASA Astrophysics Data System (ADS)
Little, James L.; Williams, Antony J.; Pshenichnov, Alexey; Tkachenko, Valery
2012-01-01
In many cases, an unknown to an investigator is actually known in the chemical literature, a reference database, or an internet resource. We refer to these types of compounds as "known unknowns." ChemSpider is a very valuable internet database of known compounds useful in the identification of these types of compounds in commercial, environmental, forensic, and natural product samples. The database contains over 26 million entries from hundreds of data sources and is provided as a free resource to the community. Accurate-mass mass spectrometry data are used to query the database by either elemental composition or a monoisotopic mass. Searching by elemental composition is the preferred approach. However, it is often difficult to determine a unique elemental composition for compounds with molecular weights greater than 600 Da. In these cases, searching by the monoisotopic mass is advantageous. In either case, the search results are refined by sorting the number of references associated with each compound in descending order. This raises the most useful candidates to the top of the list for further evaluation. These approaches were shown to be successful in identifying "known unknowns" noted in our laboratory and for compounds of interest to others.
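A monoisotopic-mass query of the kind described starts from a computed monoisotopic mass. A minimal sketch for simple CHNOS formulas is below; the isotope masses are standard table values, while the parser and function name are illustrative and not part of ChemSpider's API.

```python
import re

# Monoisotopic masses (Da) of the most abundant isotope of each element,
# from standard isotope tables (truncated precision).
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "S": 31.97207069}

def monoisotopic_mass(formula):
    """Monoisotopic mass of a simple elemental formula like 'C8H10N4O2'.
    Handles only flat CHNOS formulas; no brackets, charges, or isotopes."""
    mass = 0.0
    for elem, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        mass += MONO[elem] * (int(count) if count else 1)
    return mass
```

For caffeine (C8H10N4O2) this gives about 194.0804 Da; a database query would then search within a small tolerance (typically a few ppm) around that value.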
A Review of Safety and Design Requirements of the Artificial Pancreas.
Blauw, Helga; Keith-Hynes, Patrick; Koops, Robin; DeVries, J Hans
2016-11-01
As clinical studies with artificial pancreas systems for automated blood glucose control in patients with type 1 diabetes move to unsupervised real-life settings, product development will be a focus of companies over the coming years. Directions or requirements regarding safety in the design of an artificial pancreas are, however, lacking. This review aims to provide an overview and discussion of safety and design requirements of the artificial pancreas. We performed a structured literature search based on three search components (type 1 diabetes, artificial pancreas, and safety or design) and extended the discussion with our own experiences in developing artificial pancreas systems. The main hazards of the artificial pancreas are over- and under-dosing of insulin and, in case of a bi-hormonal system, of glucagon or other hormones. For each component of an artificial pancreas and for the complete system we identified safety issues related to these hazards and proposed control measures. Prerequisites that enable the control algorithms to provide safe closed-loop control are accurate and reliable input of glucose values, assured hormone delivery and an efficient user interface. In addition, the system configuration has important implications for safety, as close cooperation and data exchange between the different components is essential.
Loughlin, Kevin R
2016-11-01
The controversy surrounding the relationship between testosterone and prostate cancer has existed for decades. The literature surrounding this topic is confusing and at times contradictory. There is no level-one quality evidence that confirms or refutes the relationship between either high or low serum testosterone levels and the subsequent development of prostate cancer. This commentary aims to review the issues involved, to provide an interpretation as to the causes of the confusion, and to provide a framework for ongoing discussion and investigation. A Medline and PubMed search was conducted using the search terms "testosterone levels" and "prostate cancer" to identify pertinent literature. There is no consistent evidence that a single testosterone level is predictive of prostate cancer risk. The development of prostate cancer is a complex biologic process potentially involving genetic, dietary, lifestyle and hormonal factors. Serum testosterone levels do not accurately reflect the internal prostatic milieu. Finally, if testosterone levels are to be considered in the etiology of prostate cancer, they should be measured and interpreted on a chronic basis, with multiple measurements over a period of years. Copyright © 2016 Elsevier Inc. All rights reserved.
Rainey, Linda; van Nispen, Ruth; van der Zee, Carlijn; van Rens, Ger
2014-12-01
To critically appraise the measurement properties of questionnaires measuring participation in children and adolescents (0-18 years) with a disability. Bibliographic databases were searched for studies evaluating the measurement properties of self-report or parent-report questionnaires measuring participation in children and adolescents (0-18 years) with a disability. The methodological quality of the included studies and the results of the measurement properties were evaluated using a checklist developed on consensus-based standards. The search strategy identified 3,977 unique publications, of which 22 were selected; these articles evaluated the development and measurement properties of eight different questionnaires. The Child and Adolescent Scale of Participation was evaluated most extensively, generally showing moderate positive results on content validity, internal consistency, reliability and construct validity. The remaining questionnaires also demonstrated positive results. However, at least 50% of the measurement properties per questionnaire were not (or only poorly) assessed. Studies of high methodological quality, using modern statistical methods, are needed to accurately assess the measurement properties of currently available questionnaires. Moreover, consensus is required on the definition of the construct 'participation' to determine content validity and to enable meaningful interpretation of outcomes.
New generation of the multimedia search engines
NASA Astrophysics Data System (ADS)
Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro
2016-09-01
Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet's continuing growth brings greater diversity of content with each passing day. Text-based searches are becoming limited, as most of the information on the Internet is found in different types of content denominated multimedia content (images, audio files, video files). Indeed, what needs to be improved in current search engines are the search content and precision, as well as an accurate display of the search results expected by the user. Any search can be made more precise by using more text parameters, but this does not improve the content or speed of the search itself. One solution is to improve search engines through characterization of the content of multimedia files. In this article, an analysis of new-generation multimedia search engines is presented, focusing on the needs arising from new technologies. Multimedia content has become a central part of the flow of information in our daily life. This reflects the necessity of having multimedia search engines, as well as of knowing the real tasks they must fulfill. Through this analysis, it is shown that there are not many search engines that can perform content searches. The research area of new-generation multimedia search engines is a multidisciplinary area in constant growth, generating tools that satisfy the different needs of new-generation systems.
NASA Astrophysics Data System (ADS)
Huang, Chuan; Guo, Peng; Yang, Aiying; Qiao, Yaojun
2018-07-01
In single-channel systems, nonlinear phase noise comes only from the channel itself through self-phase modulation (SPM). In this paper, a fast nonlinear-effect estimation method is proposed based on the fractional Fourier transform (FrFT). The nonlinear phase noise caused by the SPM effect is accurately estimated for single-mode 10 Gbaud OOK and RZ-QPSK signals over a fiber length range of 0-200 km and a launch power range of 1-10 mW. Pulse windowing is adopted to search for the optimum fractional order of the OOK and RZ-QPSK signals. Since the nonlinear phase shift caused by the SPM effect is very small, the accurate optimum fractional order of the signal cannot be found with the traditional method. In this paper, a new method that magnifies the phase shift is proposed to obtain the accurate optimum order, from which the nonlinear phase shift is calculated. The simulation results agree with the theoretical analysis, and the method is applicable to signals whose pulse shape has characteristics similar to a Gaussian pulse.
Science, technology and mission design for LATOR experiment
NASA Astrophysics Data System (ADS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L.
2017-11-01
The Laser Astrometric Test of Relativity (LATOR) is a Michelson-Morley-type experiment designed to test Einstein's general theory of relativity in the most intense gravitational environment available in the solar system: the close proximity to the Sun. By using independent time series of highly accurate measurements of the Shapiro time delay (laser ranging accurate to 1 cm) and interferometric astrometry (accurate to 0.1 picoradian), LATOR will measure the gravitational deflection of light by solar gravity to an accuracy of 1 part in a billion, a factor of 30,000 better than currently available. LATOR will perform a series of highly accurate tests of gravitation and cosmology in its search for cosmological remnants of the scalar field in the solar system. We present the science, technology and mission design for the LATOR mission.
Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.
Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting
2018-02-12
Recently released large-scale neuron morphological data has greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Because exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which can help users explore neuron morphologies in an interactive and immersive manner.
Low background screening capability in the UK
NASA Astrophysics Data System (ADS)
Ghag, Chamkaur
2015-08-01
Low background rare event searches in underground laboratories seeking observation of direct dark matter interactions or neutrino-less double beta decay have the potential to profoundly advance our understanding of the physical universe. Successful results from these experiments depend critically on construction from extremely radiologically clean materials and accurate knowledge of subsequent low levels of expected background. The experiments must conduct comprehensive screening campaigns to reduce radioactivity from detector components, and these measurements also inform detailed characterisation and quantification of background sources and their impact, necessary to assign statistical significance to any potential discovery. To provide requisite sensitivity for material screening and characterisation in the UK to support our rare event search activities, we have re-developed our infrastructure to add ultra-low background capability across a range of complementary techniques that collectively allow complete radioactivity measurements. Ultra-low background HPGe and BEGe detectors have been installed at the Boulby Underground Laboratory, itself undergoing substantial facility refurbishment, to provide high sensitivity gamma spectroscopy in particular for measuring the uranium and thorium decay series products. Dedicated low-activity mass spectrometry instrumentation has been developed at UCL for part per trillion level contaminant identification to complement underground screening with direct U and Th measurements, and meet throughput demands. Finally, radon emanation screening at UCL measures radon background inaccessible to gamma or mass spectrometry techniques. With this new capability the UK is delivering half of the radioactivity screening for the LZ dark matter search experiment.
A New Single-Step PCR Assay for the Detection of the Zoonotic Malaria Parasite Plasmodium knowlesi
Lucchi, Naomi W.; Poorak, Mitra; Oberstaller, Jenna; DeBarry, Jeremy; Srinivasamoorthy, Ganesh; Goldman, Ira; Xayavong, Maniphet; da Silva, Alexandre J.; Peterson, David S.; Barnwell, John W.; Kissinger, Jessica; Udhayakumar, Venkatachalam
2012-01-01
Background: Recent studies in Southeast Asia have demonstrated substantial zoonotic transmission of Plasmodium knowlesi to humans. Microscopically, P. knowlesi exhibits several stage-dependent morphological similarities to P. malariae and P. falciparum. These similarities often lead to misdiagnosis of P. knowlesi as either P. malariae or P. falciparum, and PCR-based molecular diagnostic tests are required to accurately detect P. knowlesi in humans. The most commonly used PCR test has been found to give false positive results, especially with a proportion of P. vivax isolates. To address the need for more sensitive and specific diagnostic tests for the accurate diagnosis of P. knowlesi, we report development of a new single-step PCR assay that uses novel genomic targets to accurately detect this infection. Methodology and Significant Findings: We have developed a bioinformatics approach to search the available malaria parasite genome database for the identification of suitable DNA sequences relevant for molecular diagnostic tests. Using this approach, we have identified multi-copy DNA sequences distributed in the P. knowlesi genome. We designed and tested several novel primers specific to new target sequences in a single-tube, non-nested PCR assay and identified one set of primers that accurately detects P. knowlesi. We show that this primer set has 100% specificity for the detection of P. knowlesi using three different strains (Nuri, H, and Hackeri), and one human case of malaria caused by P. knowlesi. This test did not show cross-reactivity with any of the four human malaria parasite species, including 11 different strains of P. vivax, as well as 5 additional species of simian malaria parasites. Conclusions: The new PCR assay based on novel P. knowlesi genomic sequence targets was able to accurately detect P. knowlesi. Additional laboratory and field-based testing of this assay will be necessary to further validate its utility for clinical diagnosis of P. knowlesi.
NASA Technical Reports Server (NTRS)
Albornoz, Caleb Ronald
2012-01-01
Billions of documents are stored and updated daily on the World Wide Web. Most of this information is not efficiently organized to build knowledge from the stored data. Nowadays, search engines are mainly used by users who rely on their own skills to look for the information they need. This paper presents different techniques search engine users can apply in Google Search to improve the relevancy of search results. According to the Pew Research Center, the average person spends eight hours a month searching for the right information. For instance, a company with 1,000 employees wastes $2.5 million looking for information that does not exist or cannot be found. The cost is very high because decisions are made based on the information that is readily available. Whenever the information necessary to formulate an argument is not available or cannot be found, poor decisions may be made and mistakes become more likely. The survey also indicates that only 56% of Google users feel confident with their current search skills. Moreover, just 76% of the information available on the Internet is accurate.
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established following this paradigm. In feature space, we design a linear classifier as a human model to obtain user preference knowledge that cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly.
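A kernel-based preference model of the general kind described (linear in feature space, nonlinear in the original space) can be sketched as a kernel perceptron over bivalent like/dislike labels. The class name, RBF kernel, and training loop below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def rbf(u, v, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

class KernelPreferenceModel:
    """Kernel perceptron as a stand-in 'human model': learns a bivalent
    (+1 like / -1 dislike) preference from user-labelled individuals,
    then scores unseen candidates so an EC search can pre-filter them."""
    def __init__(self, gamma=0.5):
        self.gamma = gamma
        self.support = []  # list of (feature_vector, label) pairs

    def fit(self, X, y, epochs=20):
        # Perceptron rule: store every misclassified example as a support point.
        for _ in range(epochs):
            for x, label in zip(X, y):
                if self.predict(x) != label:
                    self.support.append((x, label))

    def decision(self, x):
        # Linear decision in feature space = kernel sum in input space.
        return sum(label * rbf(sx, x, self.gamma)
                   for sx, label in self.support)

    def predict(self, x):
        return 1 if self.decision(x) >= 0 else -1
```

In an IEC loop, `decision()` could rank offspring before asking the user, reducing the number of explicit human evaluations needed.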
BJUT at TREC 2015 Microblog Track: Real-Time Filtering Using Non-negative Matrix Factorization
2015-11-20
information to extend the query, alleviates the problem of concept drift in query expansion. In User profiles Twitter Google Bing accurate ambiguity...index as the query expansion document set; secondly, put the interest file into the Twitter search engine to get back the relevant tweets, the interest in...for clustering is demonstrated in Figure 2. We take the result of the Twitter search engine as the original expression of interest, the initial
Selection, Evaluation, and Rating of Compact Heat Exchangers v. 1.006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Matthew D.
2016-11-09
SEARCH determines and optimizes the design of a compact heat exchanger for specified process conditions. The user specifies process boundary conditions including the fluid state and flow rate and SEARCH will determine the optimum flow arrangement, channel geometry, and mechanical design for the unit. Fluids are modeled using NIST Refprop or tabulated values. A variety of thermal-hydraulic correlations are available including user-defined equations to accurately capture the heat transfer and pressure drop behavior of the process flows.
Analytic Calculation of Noise Power Robbing, NPR, and Polarization Isolation Degradation
NASA Technical Reports Server (NTRS)
Peters, Robert; Woolner, Peter; Ekelman, Ernest
2008-01-01
Three Geostationary Operational Environmental Satellite (GOES) R transponders (services) required analysis and measurements to develop an accurate link budget. These are a) the Search and Rescue transponder, which suffers from power robbing due to thermal uplink noise; b) the Data Collection Platform Report, which suffers from degradation due to NPR (Noise Power Ratio); and c) the GOES Rebroadcast transponder, which uses a dual circular downlink L band for which there was no depolarization data. The first two services required development of an extended link budget to analytically calculate the impact of these degradations, which are shown to have a significant impact on the link budget. The third service required measurements of atmospheric L band CP depolarization, as there were no known previous measurements, and results are reported here.
Predicting consumer behavior with Web search.
Goel, Sharad; Hofman, Jake M; Lahaie, Sébastien; Pennock, David M; Watts, Duncan J
2010-10-12
Recent work has demonstrated that Web search volume can "predict the present," meaning that it can be used to accurately track outcomes such as unemployment levels, auto and home sales, and disease prevalence in near real time. Here we show that what consumers are searching for online can also predict their collective future behavior days or even weeks in advance. Specifically we use search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes. We also find that search counts generally boost the performance of baseline models fit on other publicly available data, where the boost varies from modest to dramatic, depending on the application in question. Finally, we reexamine previous work on tracking flu trends and show that, perhaps surprisingly, the utility of search data relative to a simple autoregressive model is modest. We conclude that in the absence of other data sources, or where small improvements in predictive performance are material, search queries provide a useful guide to the near future.
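The paper's central claim, that search counts boost a baseline model fit on other data, can be sketched with a toy regression. Everything below is synthetic: the data-generating process, coefficients, and in-sample R² comparison are illustrative assumptions, not the authors' actual models.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
search = rng.gamma(shape=2.0, scale=50.0, size=n)        # weekly query volume
noise = rng.normal(0, 5, size=n)
outcome = np.empty(n)
outcome[0] = 100.0
for t in range(1, n):
    # Outcome depends on its own past (AR component) and on last week's searches
    outcome[t] = 0.5 * outcome[t - 1] + 0.8 * search[t - 1] + noise[t]

def r_squared(features, y):
    # Ordinary least squares with an intercept; returns in-sample R^2
    X = np.column_stack([np.ones(len(y))] + features)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

y = outcome[1:]
ar_only = r_squared([outcome[:-1]], y)                   # autoregressive baseline
ar_plus_search = r_squared([outcome[:-1], search[:-1]], y)
```

Because the synthetic outcome truly depends on lagged search volume, adding the search feature raises R² over the autoregressive baseline, mirroring the "modest to dramatic boost" reported in the abstract.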
Crowded visual search in children with normal vision and children with visual impairment.
Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke
2014-03-01
This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search. Copyright © 2014 Elsevier B.V. All rights reserved.
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. At first, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). At last, both the global descriptor matching score and the local descriptor similarity score are summed up to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
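The tf-idf weighted histogram matching step can be sketched for bag-of-visual-words histograms. This is a minimal cosine-similarity version under an assumed idf weighting; the CDVS weighting strategy and learned zone weights from the paper are not reproduced.

```python
import numpy as np

def tfidf_similarity(query_hist, db_hists):
    # idf is computed from how many database images contain each visual word,
    # so rare words contribute more to the similarity score.
    db = np.asarray(db_hists, dtype=float)
    n_docs = len(db)
    df = np.count_nonzero(db > 0, axis=0) + 1          # +1 avoids log(inf)
    idf = np.log(n_docs / df) + 1.0
    def embed(h):
        v = h * idf                                    # tf-idf weighting
        norm = np.linalg.norm(v)
        return v / norm if norm else v
    q = embed(np.asarray(query_hist, dtype=float))
    return np.array([q @ embed(h) for h in db])        # cosine similarities

# Toy codebook of 5 visual words; image 1 shares its rare words with the query
db = [[9, 0, 1, 0, 0],
      [0, 3, 0, 4, 2],
      [8, 1, 0, 0, 1]]
query = [0, 3, 0, 5, 1]
scores = tfidf_similarity(query, db)
best = int(np.argmax(scores))   # image 1 ranks first
```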
Sabounchi, Nasim S.; Rahmandad, Hazhir; Ammerman, Alice
2014-01-01
Basal Metabolic Rate (BMR) represents the largest component of total energy expenditure and is a major contributor to energy balance. Therefore, accurately estimating BMR is critical for developing rigorous obesity prevention and control strategies. Over the past several decades, numerous BMR formulas have been developed targeted to different population groups. A comprehensive literature search revealed 248 BMR estimation equations developed using diverse ranges of age, gender, race, fat free mass, fat mass, height, waist-to-hip ratio, body mass index, and weight. A subset of 47 studies included enough detail to allow for development of meta-regression equations. Utilizing these studies, meta-equations were developed targeted to twenty specific population groups. This review provides a comprehensive summary of available BMR equations and an estimate of their accuracy. An accompanying online BMR prediction tool (available at http://www.sdl.ise.vt.edu/tutorials.html) was developed to automatically estimate BMR based on the most appropriate equation after user-entry of individual age, race, gender, and weight. PMID:23318720
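As a concrete example of the kind of equation such a tool dispatches to, here is the widely published Mifflin-St Jeor BMR formula (one well-known equation, not necessarily among the meta-equations derived in this review); the sex-based selector is a simplified stand-in for the tool's demographic matching.

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_yr, sex):
    # Mifflin-St Jeor resting energy expenditure (kcal/day):
    #   10*W + 6.25*H - 5*A + 5 for men, -161 for women
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + (5.0 if sex == "male" else -161.0)

bmr = bmr_mifflin_st_jeor(70, 175, 30, "male")  # -> 1648.75 kcal/day
```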
Peeters, Geeske; Barker, Anna L; Talevski, Jason; Ackerman, Ilana; Ayton, Darshini R; Reid, Christopher; Evans, Sue M; Stoelwinder, Johannes U; McNeil, John J
2018-05-01
Patient-reported outcome measures (PROMs) capture health information from the patient's perspective that can be used when weighing up benefits, risks and costs of treatment. This is important for elective procedures such as those for coronary revascularisation. Patients should be involved in the development of PROMs to accurately capture outcomes that are important for the patient. The aims of this review are to identify if patients were involved in the development of cardiovascular-specific PROMs used for assessing outcomes from elective coronary revascularisation, and to explore what methods were used to capture patient perspectives. PROMs for evaluating outcomes from elective coronary revascularisation were identified from a previous review and an updated systematic search. The studies describing the development of the PROMs were reviewed for information on patient input in their conceptual and/or item development. 24 PROMs were identified from a previous review and three additional PROMs were identified from the updated search. Full texts were obtained for 26 of the 27 PROMs. The 26 studies (11 multidimensional, 15 unidimensional) were reviewed. Only nine studies reported developing PROMs using patient input. For eight PROMs, the inclusion of patient input could not be judged due to insufficient information in the full text. Only nine of the 26 reviewed PROMs used in elective coronary revascularisation reported involving patients in their conceptual and/or item development, while patient input was unclear for eight PROMs. These findings suggest that the patient's perspective is often overlooked or poorly described in the development of PROMs.
Separating astrophysical sources from indirect dark matter signals
Siegal-Gaskins, Jennifer M.
2015-01-01
Indirect searches for products of dark matter annihilation and decay face the challenge of identifying an uncertain and subdominant signal in the presence of uncertain backgrounds. Two valuable approaches to this problem are (i) using analysis methods which take advantage of different features in the energy spectrum and angular distribution of the signal and backgrounds and (ii) more accurately characterizing backgrounds, which allows for more robust identification of possible signals. These two approaches are complementary and can be significantly strengthened when used together. I review the status of indirect searches with gamma rays using two promising targets, the Inner Galaxy and the isotropic gamma-ray background. For both targets, uncertainties in the properties of backgrounds are a major limitation to the sensitivity of indirect searches. I then highlight approaches which can enhance the sensitivity of indirect searches using these targets. PMID:25304638
Wang, Xingmei; Liu, Shu; Liu, Zhipeng
2017-01-01
This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs in non-local spatial information and seriously affects denoising performance in sonar images, was solved by utilizing a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve the search ability and detection accuracy while further reducing the time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets and the benchmark functions demonstrate the effectiveness and adaptability of the proposed method.
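The fitness idea, combining intra-class (within-group) difference with inter-class (between-group) difference to score a candidate threshold, can be sketched as follows. The exact functional form and the QSFLA-NSM optimizer are not given here; a brute-force scan over thresholds and a variance-ratio fitness are stand-in assumptions.

```python
import numpy as np

def fitness(pixels, t, eps=1e-9):
    # Hypothetical fitness: reward a large inter-class (between) difference
    # and a small intra-class (within) spread for the split at threshold t.
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    between = (lo.mean() - hi.mean()) ** 2
    within = lo.var() + hi.var()
    return between / (within + eps)

rng = np.random.default_rng(7)
# Bimodal "sonar" intensities: dark background plus bright object highlights
pixels = np.concatenate([rng.normal(40, 6, 800), rng.normal(160, 10, 200)])
thresholds = np.arange(20, 180)
best_t = thresholds[np.argmax([fitness(pixels, t) for t in thresholds])]
```

The paper replaces this exhaustive scan with the QSFLA-NSM search, which matters when the decision variable is multi-dimensional rather than a single threshold.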
Designing a practical system for spectral imaging of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L
2005-09-20
In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS) sensors and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are computationally expensive, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
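A generic simulated annealing loop of the kind used to replace the exhaustive sensor search might look like this. The cost function below is a toy multimodal function, not the paper's combined spectral/colorimetric cost; the step size, cooling schedule, and iteration count are illustrative.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=2000, seed=1):
    # Accept worse moves with probability exp(-delta/T) so the search can
    # escape local minima, then gradually cool the temperature T.
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    T = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        c_cand = cost(cand)
        delta = c_cand - c
        if delta < 0 or rng.random() < math.exp(-delta / T):
            x, c = cand, c_cand
            if c < best_c:
                best_x, best_c = x, c
        T *= cooling
    return best_x, best_c

# Multimodal 1-D cost with its global minimum (value 0) at x = 0
f = lambda x: 0.1 * x * x + math.sin(3 * x) ** 2
x_best, c_best = simulated_annealing(f, x0=4.0)
```

The single scalar cost is the key requirement the abstract mentions: annealing needs one number to compare candidate sensor sets, which is why the authors fold spectral and colorimetric similarity into one function.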
Pedraza, Dixis Figueroa; de Menezes, Tarciana Nobre
2016-01-01
Abstract Objective: To obtain an overview of available information on the anthropometric assessment of Brazilian children attending daycare centers. Data source: A literature search was carried out in the PubMed, LILACS and SciELO databases of studies published from 1990 to 2013 in Portuguese and English languages. The following search strategy was used: (nutritional status OR anthropometrics OR malnutrition OR overweight) AND daycare centers, as well as the equivalent terms in Portuguese. In the case of MEDLINE search, the descriptor Brazil was also used. Data synthesis: It was verified that the 33 studies included in the review were comparable from a methodological point of view. The studies, in general, were characterized by their restrictive nature, geographical concentration and dispersion of results in relation to time. Considering the studies published from 2010 onwards, low prevalence of acute malnutrition and significant rates of stunting and overweight were observed. Conclusions: Despite the limitations, considering the most recent studies that used the WHO growth curves (2006), it is suggested that the anthropometric profile of Brazilian children attending daycare centers is characterized by a nutritional transition process, with significant prevalence of overweight and short stature. We emphasize the need to develop a multicenter survey that will more accurately define the current anthropometric nutritional status of Brazilian children attending daycare centers. PMID:26553574
Raj, S.; Sharma, V. L.; Singh, A. J.; Goel, S.
2016-01-01
Background. The available health information on websites should be reliable and accurate so that the community can make informed decisions. This study was done to assess the quality and readability of health information websites on the World Wide Web in India. Methods. This cross-sectional study was carried out in June 2014. The key words "Health" and "Information" were used on the search engines "Google" and "Yahoo." Out of 50 websites (25 from each search engine), after exclusion, 32 websites were evaluated. The LIDA tool was used to assess quality, whereas readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and SMOG. Results. Forty percent of websites (n = 13) were sponsored by the government. Health On the Net Code of Conduct (HONcode) certification was present on 50% (n = 16) of websites. The mean LIDA score (74.31) was average. Only 3 websites scored high on the LIDA score. Only five had readability scores at the recommended sixth-grade level. Conclusion. Most health information websites had average quality, especially in terms of usability and reliability, and were written at high readability levels. Efforts are needed to develop health information websites which can help the general population in informed decision making. PMID:27119025
Design of the VISITOR Tool: A Versatile ImpulSive Interplanetary Trajectory OptimizeR
NASA Technical Reports Server (NTRS)
Corpaccioli, Luca; Linskens, Harry; Komar, David R.
2014-01-01
The design of trajectories for interplanetary missions represents one of the most complex and important problems to solve during conceptual space mission design. To facilitate conceptual mission sizing activities, it is essential to obtain sufficiently accurate trajectories in a fast and repeatable manner. To this end, the VISITOR tool was developed. This tool modularly augments a patched conic MGA-1DSM model with a mass model, launch window analysis, and the ability to simulate more realistic arrival and departure operations. This was implemented in MATLAB, exploiting the built-in optimization tools and vector analysis routines. The chosen optimization strategy uses a grid search and pattern search, an iterative variable grid method. A genetic algorithm can be selectively used to improve search space pruning, at the cost of losing the repeatability of the results and increased computation time. The tool was validated against seven flown missions: the average total mission (Delta)V offset from the nominal trajectory was 9.1%, which was reduced to 7.3% when using the genetic algorithm at the cost of an increase in computation time by a factor 5.7. It was found that VISITOR was well-suited for the conceptual design of interplanetary trajectories, while also facilitating future improvements due to its modular structure.
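The pattern search component (the "iterative variable grid method") can be sketched as a compass search that polls each coordinate and shrinks the grid when no move improves the objective. The objective below is a toy quadratic standing in for a trajectory delta-V cost, not VISITOR's actual model.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    # Compass/pattern search: poll +/- step along each coordinate; move to
    # any improvement, otherwise shrink the grid -- an iterative variable grid.
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= shrink
    return x, fx

# Toy smooth objective with minimum value 2.0 at (3, -1)
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2 + 2.0
x_opt, f_opt = pattern_search(f, [0.0, 0.0])
```

A coarse grid search over the launch-window space would typically seed `x0` here; the shrinking step then refines the solution locally, which matches the grid-then-pattern strategy the abstract describes.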
A fresh approach to forecasting in astroparticle physics and dark matter searches
NASA Astrophysics Data System (ADS)
Edwards, Thomas D. P.; Weniger, Christoph
2018-02-01
We present a toolbox of new techniques and concepts for the efficient forecasting of experimental sensitivities. These are applicable to a large range of scenarios in (astro-)particle physics, and based on the Fisher information formalism. Fisher information provides an answer to the question 'what is the maximum extractable information from a given observation?'. It is a common tool for the forecasting of experimental sensitivities in many branches of science, but rarely used in astroparticle physics or searches for particle dark matter. After briefly reviewing the Fisher information matrix of general Poisson likelihoods, we propose very compact expressions for estimating expected exclusion and discovery limits ('equivalent counts method'). We demonstrate by comparison with Monte Carlo results that they remain surprisingly accurate even deep in the Poisson regime. We show how correlated background systematics can be efficiently accounted for by a treatment based on Gaussian random fields. Finally, we introduce the novel concept of Fisher information flux. It can be thought of as a generalization of the commonly used signal-to-noise ratio, while accounting for the non-local properties and saturation effects of background and instrumental uncertainties. It is a powerful and flexible tool ready to be used as core concept for informed strategy development in astroparticle physics and searches for particle dark matter.
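For independent Poisson bins, the Fisher information matrix has the standard closed form F_ij = Σ_k (∂μ_k/∂θ_i)(∂μ_k/∂θ_j)/μ_k, which is all a minimal forecast needs. The two-component toy model below (signal and background amplitudes over four energy bins) is an illustrative assumption, not one of the paper's worked examples.

```python
import numpy as np

def fisher_poisson(dmu, mu):
    # Fisher matrix for independent Poisson bins:
    #   F_ij = sum_k (dmu_k/dtheta_i) * (dmu_k/dtheta_j) / mu_k
    dmu = np.asarray(dmu, dtype=float)   # shape (n_params, n_bins)
    mu = np.asarray(mu, dtype=float)     # expected counts per bin
    return (dmu / mu) @ dmu.T

# Toy model: mu_k = s * signal_k + b * background_k in 4 energy bins
signal = np.array([10.0, 5.0, 2.0, 1.0])
background = np.array([50.0, 50.0, 50.0, 50.0])
s, b = 1.0, 1.0
mu = s * signal + b * background
F = fisher_poisson([signal, background], mu)   # derivatives are the templates
cov = np.linalg.inv(F)
sigma_s = np.sqrt(cov[0, 0])   # forecast 1-sigma error on the signal amplitude
```

Inverting F and reading off the diagonal gives the forecast parameter uncertainties, marginalized over the background amplitude, without any Monte Carlo, which is the speed advantage the abstract emphasizes.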
Roesler, Elizabeth L.; Grabowski, Timothy B.
2018-01-01
Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with the increasing litter depth and fewer number of shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated to the number of shells in the quadrat and negatively correlated to the number of times a quadrat was searched. The results indicate quadrat surveys likely underrepresent true abundance, but accurately determine the presence or absence. Understanding detection and accuracy of elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.
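The headline detection rate can be connected to a simple binomial model: a sketch with hypothetical counts chosen to be consistent with the reported 67.4% detection probability, not the study's raw data.

```python
import math

def detection_estimate(detections, trials):
    # Naive detection-probability estimate with its binomial standard error
    p = detections / trials
    se = math.sqrt(p * (1 - p) / trials)
    return p, se

# Hypothetical counts: shells found in 324 of 480 single-observer searches
p, se = detection_estimate(324, 480)   # p = 0.675
```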
Kamrava, Brandon; Roehm, Pamela C
2017-08-01
Objective To systematically review the anatomy of the ossicular chain. Data Sources Google Scholar, PubMed, and otologic textbooks. Review Methods A systematic literature search was performed on January 26, 2015. Search terms consisted of combinations of 2 keywords, one from each group: [ossicular, ossicle, malleus, incus, stapes] and [morphology, morphometric, anatomy, variation, physiology], yielding more than 50,000 hits. Articles were then excluded by title and abstract screening if they did not contain information relevant to human ossicular chain anatomy. In addition to this search, references of selected articles were studied, as were relevant articles suggested by publication databases. Standard otologic textbooks were screened using the search criteria. Results Thirty-three sources were selected for use in this review. From these studies, data on the composition, physiology, morphology, and morphometrics were acquired. In addition, any correlations or lack of correlations between features of the ossicular chain and other features of the ossicular chain or patient were noted, with bilateral symmetry between ossicles being the only important correlation reported. Conclusion There was significant variation in all dimensions of each ossicle between individuals; given that degree of variation, custom fitting or custom manufacturing of prostheses for each patient could optimize prosthesis fit. From published data, an accurate 3-dimensional model of the malleus, incus, and stapes can be created, which can then be further modified for each patient's individual anatomy.
EGNOS-Based Multi-Sensor Accurate and Reliable Navigation in Search-and-Rescue Missions with UAVs
NASA Astrophysics Data System (ADS)
Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Stebler, Y.; Skaloud, J.; Kornus, W.; Prades, R.
2011-09-01
This paper will introduce and describe the goals, concept and overall approach of the European 7th Framework Programme project CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost SAR operations'. The goal of CLOSE-SEARCH is to integrate into a helicopter-type unmanned aerial vehicle a thermal imaging sensor and a multi-sensor navigation system (based on a Barometric Altimeter (BA), a Magnetometer (MAGN), a Redundant Inertial Navigation System (RINS) and an EGNOS-enabled GNSS receiver) with an Autonomous Integrity Monitoring (AIM) capability, to support the search component of Search-And-Rescue operations in remote, difficult-to-access areas and/or in time-critical situations. The proposed integration will result in a hardware and software prototype that will demonstrate end-to-end functionality, that is, to fly in patterns over a region of interest (possibly inaccessible) during day or night, including under adverse weather conditions, and to locate disaster survivors or lost people there through the detection of body heat. This paper will identify the technical challenges of the proposed approach, from navigating with a BA/MAGN/RINS/GNSS-EGNOS-based integrated system to the interpretation of thermal images for person identification. Moreover, the AIM approach will be described together with the proposed integrity requirements. Finally, this paper will show some results obtained in the project during the first test campaign, performed in November 2010. On that day, a prototype was flown in three different missions to assess its high-level performance and to observe some fundamental mission parameters, such as the optimal flying height and flying speed to enable body recognition. The second test campaign is scheduled for the end of 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua
2014-03-15
This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
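A heavily simplified sketch of the hybrid idea, a cuckoo search whose control parameters adapt over iterations combined with a simulated-annealing acceptance rule, is below. Gaussian perturbations stand in for Lévy flights, and all schedules and constants are illustrative assumptions, not the paper's settings.

```python
import math
import numpy as np

def hybrid_cuckoo_search(cost, dim, bounds, n_nests=15, iters=300, seed=3):
    # Adaptive cuckoo search + SA acceptance (simplified sketch).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    costs = np.array([cost(n) for n in nests])
    best_i = int(np.argmin(costs))
    best_x, best_c = nests[best_i].copy(), float(costs[best_i])
    T = 1.0
    for it in range(iters):
        alpha = 0.5 * (1.0 - it / iters) + 0.01   # adaptive step size
        pa = 0.4 * (1.0 - it / iters) + 0.1       # adaptive discovery rate
        for i in range(n_nests):
            cand = np.clip(nests[i] + alpha * rng.standard_normal(dim), lo, hi)
            c = cost(cand)
            delta = c - costs[i]
            # SA-style acceptance: occasionally keep a worse nest
            if delta < 0 or rng.random() < math.exp(-delta / T):
                nests[i], costs[i] = cand, c
                if c < best_c:
                    best_x, best_c = cand.copy(), float(c)
        # abandon a fraction pa of the worst nests, as in cuckoo search
        n_drop = max(1, int(pa * n_nests))
        worst = np.argsort(costs)[-n_drop:]
        nests[worst] = rng.uniform(lo, hi, size=(n_drop, dim))
        costs[worst] = [cost(n) for n in nests[worst]]
        T *= 0.99
    return best_x, best_c

# In the paper the cost would be a fit error between observed and simulated
# Lorenz trajectories; a sphere function stands in here.
sphere = lambda x: float(np.sum(x ** 2))
x_best, c_best = hybrid_cuckoo_search(sphere, dim=3, bounds=(-5.0, 5.0))
```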
Biosimilars: Considerations for Oncology Nurses.
Vizgirda, Vida; Jacobs, Ira
2017-04-01
Biosimilars are developed to be highly similar to and treat the same conditions as licensed biologics. As they are approved and their use becomes more widespread, oncology nurses should be aware of their development and unique considerations. This article reviews properties of biosimilars; their regulation and approval process; the ways in which their quality, safety, and efficacy are evaluated; their postmarketing safety monitoring; and their significance to oncology nurses and oncology nursing. A search of PubMed and regulatory agency websites was conducted for references related to the development and use of biosimilars in oncology. Because biologics are large, structurally complex molecules, biosimilars cannot be considered generic equivalents to licensed biologic products. Consequently, regulatory approval for biosimilars is different from approval for small-molecule generics. Oncology nurses are in a unique position to educate themselves, other clinicians, and patients and their families about biosimilars to ensure accurate understanding and optimal, safe use of biosimilars.
Economic Load Dispatch Using Adaptive Social Acceleration Constant Based Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Jain, N. K.; Nangia, Uma; Jain, Jyoti
2018-04-01
In this paper, an Adaptive Social Acceleration Constant based Particle Swarm Optimization (ASACPSO) has been developed which uses the best value of the social acceleration constant (Csg). Three formulations of Csg were used to search for its best value. These three formulations led to the development of three algorithms, ALDPSO, AELDPSO-I and AELDPSO-II, which were implemented for Economic Load Dispatch of the IEEE 5-bus, 14-bus and 30-bus systems. The best value of Csg was selected based on the minimum number of Kounts, i.e., the number of function evaluations required to minimize the function. This value of Csg was then used directly in the basic PSO algorithm, which led to the development of the ASACPSO algorithm. ASACPSO was found to converge faster and give more accurate results than BPSO for the IEEE 5-, 14- and 30-bus systems.
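A basic global-best PSO with the social acceleration constant exposed as a parameter shows where Csg enters the velocity update; the adaptive formulations for Csg and the Kount-based selection from the paper are not reproduced here, and the sphere objective stands in for an economic load dispatch cost.

```python
import numpy as np

def pso(cost, dim, bounds, c_soc, c_cog=1.5, w=0.7, n=20, iters=200, seed=5):
    # Global-best PSO; c_soc (Csg) weights the pull toward the swarm's best
    # position in the velocity update, which is the constant the paper adapts.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_c = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_c)].copy()
    g_c = float(pbest_c.min())
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c_cog * r1 * (pbest - x) + c_soc * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_c
        pbest[improved], pbest_c[improved] = x[improved], c[improved]
        if pbest_c.min() < g_c:
            g_c = float(pbest_c.min())
            g = pbest[np.argmin(pbest_c)].copy()
    return g, g_c

sphere = lambda p: float(np.sum(p ** 2))
g, g_c = pso(sphere, dim=2, bounds=(-10.0, 10.0), c_soc=1.5)
```

Sweeping `c_soc` over candidate values and counting cost evaluations until convergence would mimic the Kount-based selection described in the abstract.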
Finding Direction in the Search for Selection.
Thiltgen, Grant; Dos Reis, Mario; Goldstein, Richard A
2017-01-01
Tests for positive selection have mostly been developed to look for diversifying selection where change away from the current amino acid is often favorable. However, in many cases we are interested in directional selection where there is a shift toward specific amino acids, resulting in increased fitness in the species. Recently, a few methods have been developed to detect and characterize directional selection on a molecular level. Using the results of evolutionary simulations as well as HIV drug resistance data as models of directional selection, we compare two such methods with each other, as well as against a standard method for detecting diversifying selection. We find that the method to detect diversifying selection also detects directional selection under certain conditions. One method developed for detecting directional selection is powerful and accurate for a wide range of conditions, while the other can generate an excessive number of false positives.
Enhanced Sampling Methods for the Computation of Conformational Kinetics in Macromolecules
NASA Astrophysics Data System (ADS)
Grazioli, Gianmarc
Calculating the kinetics of conformational changes in macromolecules, such as proteins and nucleic acids, is still very much an open problem in theoretical chemistry and computational biophysics. If it were feasible to run large sets of molecular dynamics trajectories that begin in one configuration and terminate when reaching another configuration of interest, calculating kinetics from molecular dynamics simulations would be simple, but in practice, configuration spaces encompassing all possible configurations for even the simplest of macromolecules are far too vast for such a brute force approach. In fact, many problems related to searches of configuration spaces, such as protein structure prediction, are considered to be NP-hard. Two approaches to addressing this problem are to either develop methods for enhanced sampling of trajectories that confine the search to productive trajectories without loss of temporal information, or coarse-grained methodologies that recast the problem in reduced spaces that can be exhaustively searched. This thesis will begin with a description of work carried out in the vein of the second approach, where a Smoluchowski diffusion equation model was developed that accurately reproduces the rate vs. force relationship observed in the mechano-catalytic disulphide bond cleavage observed in thioredoxin-catalyzed reduction of disulphide bonds. Next, three different novel enhanced sampling methods developed in the vein of the first approach will be described, which can be employed either separately or in conjunction with each other to autonomously define a set of energetically relevant subspaces in configuration space, accelerate trajectories between the interfaces dividing the subspaces while preserving the distribution of unassisted transition times between subspaces, and approximate time correlation functions from the kinetic data collected from the transitions between interfaces.
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
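GROOT's fast read classification rests on similarity-search queries against sketches of sequence data. A minimal MinHash sketch of that underlying idea follows; this is illustrative only (GROOT itself is written in Go and uses an LSH Forest index over variation graphs, not this code).

```python
import hashlib

def minhash_signature(kmers, num_hashes=32):
    """MinHash signature of a set of k-mers: for each of num_hashes salted
    hash functions, keep the minimum hash value over the set."""
    return [
        min(int(hashlib.md5(f"{i}:{k}".encode()).hexdigest(), 16) for k in kmers)
        for i in range(num_hashes)
    ]

def jaccard_estimate(sig_a, sig_b):
    """The fraction of matching signature slots estimates the Jaccard
    similarity of the two original k-mer sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Because signatures are small and comparisons are cheap, reads can be rapidly assigned to candidate reference genes before the slower, accurate alignment step.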
Current Development at the Southern California Earthquake Data Center (SCEDC)
NASA Astrophysics Data System (ADS)
Appel, V. L.; Clayton, R. W.
2005-12-01
Over the past year, the SCEDC completed or neared completion of three featured projects. Station Information System (SIS) Development: The SIS will provide users with an interface into complete and accurate station metadata for all current and historic data at the SCEDC. The goal of this project is to develop a system that can interact with a single database source to enter, update, and retrieve station metadata easily and efficiently. The system will provide accurate station/channel information for active stations to the SCSN real-time processing system, as well as station/channel information for stations that have parametric data at the SCEDC, i.e., for users retrieving data via STP. Additionally, the SIS will supply the information required to generate dataless SEED and COSMOS V0 volumes and allow stations to be added to the system with a minimal, but incomplete, set of information using predefined defaults that can be easily updated as more information becomes available. Finally, the system will facilitate statewide metadata exchange for real-time processing and provide a common approach to CISN historic station metadata. Moment Tensor Solutions: The SCEDC is currently archiving and delivering Moment Magnitudes and Moment Tensor Solutions (MTS) produced by the SCSN in real time, as well as post-processing solutions for events spanning back to 1999. The automatic MTS runs on all local events with magnitudes > 3.0 and all regional events > 3.5. The distributed solution automatically creates links from all USGS Simpson Maps to a text e-mail summary solution, creates a .gif image of the solution, and updates the moment tensor database tables at the SCEDC. Searchable Scanned Waveforms Site: The Caltech Seismological Lab has made available 12,223 scanned images of pre-digital analog recordings of major earthquakes recorded in Southern California between 1962 and 1992 at http://www.data.scec.org/research/scans/.
The SCEDC has developed a searchable web interface that allows users to search the available files, select multiple files for download and then retrieve a zipped file containing the results. Scanned images of paper records for M>3.5 southern California earthquakes and several significant teleseisms are available for download via the SCEDC through this search tool.
Keenswijk, Werner; Vanmassenhove, Jill; Raes, Ann; Dhont, Evelyn; Vande Walle, Johan
2017-03-01
Diarrhea-associated hemolytic uremic syndrome (D+HUS) is a common thrombotic microangiopathy during childhood, and early identification of parameters predicting poor outcome could enable timely intervention. This study aims to establish the accuracy of the BUN-to-serum creatinine ratio at admission, in addition to other parameters, in predicting the clinical course and outcome. Records were searched for children admitted with D+HUS between 1 January 2008 and 1 January 2015. A complicated course was defined as developing one or more of the following: neurological dysfunction, pancreatitis, cardiac or pulmonary involvement, hemodynamic instability, and hematologic complications, while poor outcome was defined by death or development of chronic kidney disease. Thirty-four children were included, of which 11 had a complicated disease course/poor outcome. Risk of a complicated course/poor outcome was strongly associated with oliguria (p = 0.000006) and hypertension (p = 0.00003) at presentation. In addition, higher serum creatinine (p = 0.000006) and sLDH (p = 0.02), with a lower BUN-to-serum creatinine ratio (p = 0.000007), were significantly associated with development of complications. A BUN-to-serum creatinine ratio ≤40 at admission was a sensitive and highly specific predictor of a complicated disease course/poor outcome. The BUN-to-serum creatinine ratio can accurately identify children with D+HUS at risk for a complicated course and poor outcome. What is Known: • Oliguria is a predictor of poor long-term outcome in D+HUS. What is New: • The BUN-to-serum creatinine ratio at admission is an entirely novel and accurate predictor of poor outcome and a complicated clinical course in D+HUS. • Early detection of the high-risk group in D+HUS enables early treatment and adequate monitoring.
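The reported admission rule is simple arithmetic. A minimal sketch follows; function names are illustrative, and both inputs are assumed to be in mg/dL, the conventional units for this ratio.

```python
def bun_scr_ratio(bun_mg_dl, creatinine_mg_dl):
    """BUN-to-serum-creatinine ratio; both values assumed in mg/dL."""
    return bun_mg_dl / creatinine_mg_dl

def flags_complicated_course(bun_mg_dl, creatinine_mg_dl, cutoff=40.0):
    """Apply the study's reported admission cutoff: a ratio <= 40 was a
    sensitive and highly specific predictor of a complicated course."""
    return bun_scr_ratio(bun_mg_dl, creatinine_mg_dl) <= cutoff
```

Note that a *low* ratio flags risk here, since serum creatinine rises disproportionately to BUN in the at-risk group.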
The Alphabet Soup of HIV Reservoir Markers.
Sharaf, Radwa R; Li, Jonathan Z
2017-04-01
Despite the success of antiretroviral therapy in suppressing HIV, life-long therapy is required to avoid HIV reactivation from long-lived viral reservoirs. Currently, there is intense interest in searching for therapeutic interventions that can purge the viral reservoir to achieve complete remission in HIV patients off antiretroviral therapy. The evaluation of such interventions relies on our ability to accurately and precisely measure the true size of the viral reservoir. In this review, we assess the most commonly used HIV reservoir assays, as a clear understanding of the strengths and weaknesses of each is vital for the accurate interpretation of results and for the development of improved assays. The quantification of intracellular or plasma HIV RNA or DNA levels remains the most commonly used tests for the characterization of the viral reservoir. While cost-effective and high-throughput, these assays are not able to differentiate between replication-competent or defective fractions or quantify the number of infected cells. Viral outgrowth assays provide a lower bound for the fraction of cells that can produce infectious virus, but these assays are laborious, expensive and substantially underestimate the potential reservoir of replication-competent provirus. Newer assays are now available that seek to overcome some of these problems, including full-length proviral sequencing, inducible HIV RNA assays, ultrasensitive p24 assays and murine adoptive transfer techniques. The development and evaluation of strategies for HIV remission rely upon our ability to accurately and precisely quantify the size of the remaining viral reservoir. At this time, all current HIV reservoir assays have drawbacks such that combinations of assays are generally needed to gain a more comprehensive view of the viral reservoir. The development of novel, rapid, high-throughput assays that can sensitively quantify the levels of the replication-competent HIV reservoir is still needed.
Determining water content of fresh concrete by microwave reflection or transmission measurement.
DOT National Transportation Integrated Search
1987-01-01
In search of a rapid and accurate method for determining the water content of fresh concrete mixes, the microwave reflection and transmission properties of fresh concrete mixes were studied to determine the extent of correlation between each of these...
Semantic Features for Classifying Referring Search Terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, Chandler J.; Henry, Michael J.; McGrath, Liam R.
2012-05-11
When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods, which are susceptible to the obfuscation of dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Knowns and unknowns in metabolomics identified by multidimensional NMR and hybrid MS/NMR methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem; Brüschweiler, Rafael
Metabolomics continues to make rapid progress through the development of new and better methods and their applications to gain insight into the metabolism of a wide range of different biological systems from a systems biology perspective. Customization of NMR databases and search tools allows the faster and more accurate identification of known metabolites, whereas the identification of unknowns, without a need for extensive purification, requires new strategies to integrate NMR with mass spectrometry, cheminformatics, and computational methods. For some applications, the use of covalent and non-covalent attachments in the form of labeled tags or nanoparticles can significantly reduce the complexity of these tasks.
Inverse lithography using sparse mask representations
NASA Astrophysics Data System (ADS)
Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan
2015-03-01
We present a novel optimization algorithm for inverse lithography, based on optimizing the mask derivative, a domain that is inherently sparse and, for rectilinear polygons, invertible. The method is first developed assuming a point light source and then extended to general incoherent sources. What results is a fast algorithm that produces manufacturable masks (the search space is constrained to rectilinear polygons) and is flexible (specific constraints such as minimal line widths can be imposed). One inherent trick is to treat polygons as continuous entities, making aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated into the optimization without much added complexity. We also explain how to extend the scheme to phase-changing mask optimization.
Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics.
Keich, Uri; Kertesz-Farkas, Attila; Noble, William Stafford
2015-08-07
Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications.
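The target-decoy competition (TDC) protocol evaluated here can be summarized directly: each spectrum keeps the better of its target and decoy scores, and the FDR above a score threshold is estimated as the ratio of accepted decoys to accepted targets. A minimal illustration follows; this is the basic TDC estimator, not the authors' mix-max procedure.

```python
def tdc_fdr(target_scores, decoy_scores, threshold):
    """Target-decoy competition FDR estimate.

    For each spectrum, keep the higher of its target and decoy search
    scores, then estimate the FDR above `threshold` as the number of
    accepted decoys divided by the number of accepted targets.
    """
    accepted_t = accepted_d = 0
    for t, d in zip(target_scores, decoy_scores):
        best, is_target = (t, True) if t >= d else (d, False)
        if best >= threshold:
            if is_target:
                accepted_t += 1
            else:
                accepted_d += 1
    return accepted_d / max(accepted_t, 1)
```

The cost the paper highlights is visible here: when a decoy outscores its target, that spectrum's target identification is discarded even if it scored highly.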
Accurate visible speech synthesis based on concatenating variable length motion capture data.
Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara
2006-01-01
We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated, and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
Predicting the Quasar Photometric Redshift with the Sloan Digital Sky Survey Filter System
NASA Astrophysics Data System (ADS)
Laubacher, Emily M.; York, Donald G.
1999-10-01
Photometric data were obtained for a set of known quasars (QSOs) in five bands with the Sloan Digital Sky Survey (SDSS) filter system for the purpose of testing the ability of the SDSS system to accurately predict the photometric redshift of QSOs. The initial plot of the SDSS photometric redshift versus the measured redshift shows a good relationship but considerable scatter. A literature search was conducted on a selected sampling of 49 QSOs, 26 with redshift z <= 0.5 and 23 with 0.5 < z < 2.6, to confirm their accurate identifications as QSOs with their advertised redshifts. This search revealed that 10 of the objects were not QSOs at all but rather Seyfert galaxies or Narrow Line Objects, and these were rejected. Additionally, 11 QSOs were either Broad Absorption Line Systems or had spectra that were in some way incomplete, and therefore their QSO identification could not be confirmed. The revised plot, with the rejected and unconfirmed QSOs removed, gives an excellent straight line with very little scatter. Although these results are preliminary and for only a small sampling of QSOs, they show that further study of the relationship is warranted and that eventually the SDSS method may be used to accurately predict the photometric redshift of QSOs.
Kim, Bong Jun; Lee, Sungsoo
2018-04-01
The huge improvements in the speed of data transmission and the increasing amount of data available as the Internet has expanded have made it easy to obtain information about any disease. Since pneumothorax frequently occurs in young adolescents, patients often search the Internet for information on pneumothorax. This study analyzed an Internet community for exchanging information on pneumothorax, with an emphasis on the importance of accurate information and doctors' role in providing such information. This study assessed 599,178 visitors to the Internet community from June 2008 to April 2017. There was an average of 190 visitors, 2.2 posts, and 4.5 replies per day. A total of 6,513 posts were made, and 63.3% of them included questions about the disease. The visitors mostly searched for terms such as 'pneumothorax,' 'recurrent pneumothorax,' 'pneumothorax operation,' and 'obtaining a medical certification of having been diagnosed with pneumothorax.' However, 22% of the pneumothorax-related posts by visitors contained inaccurate information. Internet communities can be an important source of information. However, incorrect information about a disease can be harmful for patients. We, as doctors, should try to provide more in-depth information about diseases to patients and to disseminate accurate information about diseases in Internet communities.
Boylan, S; Louie, J C Y; Gill, T P
2012-07-01
Strong evidence linking poor diet and lack of physical activity to risk of obesity and related chronic disease has supported the development and promotion of guidelines to improve population health. Still, obesity continues to escalate as a major health concern, and so the impact of weight-related guidelines on behaviour is unclear. The aim of this review was to examine consumer response to weight-related guidelines. A systematic literature search was performed using Medline, PsycInfo, ProQuest Central and additional searches using Google and reference lists. Of the 1,765 articles identified, 46 relevant titles were included. Most studies examined attitudes towards content, source, tailoring and comprehension of dietary guidelines. Many respondents reported that guidelines were confusing, and that simple, clear, specific, realistic, and in some cases, tailored guidelines are required. Recognition of guidelines did not signify understanding nor did perceived credibility of a source guarantee utilization of guidelines. There was a lack of studies assessing: the impact of guidelines on behaviour; responses to physical activity guidelines; responses among males and studies undertaken in developing countries. Further research is needed, in particular regarding responses to physical activity guidelines and guidelines in different populations. Communication professionals should assist health professionals in the development of accurate and effective weight-related guidelines. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.
A Comparative Review of Electronic Prescription Systems: Lessons Learned from Developed Countries
Samadbeik, Mahnaz; Ahmadi, Maryam; Sadoughi, Farahnaz; Garavand, Ali
2017-01-01
This review study aimed to compare the electronic prescription systems of five selected countries (Denmark, Finland, Sweden, England, and the United States). These developed countries were selected, through an explicit selection process, from the countries that have electronic prescription systems. Required data were collected by searching valid databases and the most widely used search engines, visiting websites related to the national electronic prescription system of each country, and sending e-mails to the related organizations, using specifically designed data collection forms. The findings showed that the electronic prescription system was used at the national, state, local, and area levels in the studied countries and covered the whole prescription process or part of it. All studied countries provided capabilities for creating electronic prescriptions, decision support, electronically transmitting prescriptions from prescriber systems to pharmacies, retrieving electronic prescriptions at the pharmacy, and electronically refilling prescriptions. The patient, prescriber, and dispenser were the main human actors, and the prescribing and dispensing providers were the main system actors, of the Electronic Prescription Service. The selected countries have accurate, regular, and systematic plans for using electronic prescription systems, and the health ministries of these countries were responsible for coordinating and leading electronic health. It is suggested that the experiences and programs of these leading countries be used to design and develop electronic prescription systems. PMID:28331859
Thurman, E Michael; Ferrer, Imma; Zavitsanos, Paul; Zweigenbaum, Jerry A
2013-09-15
Imidacloprid is a potent and widely used insecticide on vegetable crops, such as onion (Allium cepa L.). Because of possible toxicity to beneficial insects, imidacloprid and several metabolites have raised safety concerns for pollenating insects, such as honey bees. Thus, imidacloprid metabolites continue to be an important subject for new methods that better understand its dissipation and fate in plants, such as onions. One month after a single addition of imidacloprid to soil containing onion plants, imidacloprid and its metabolites were extracted from pulverized onion with a methanol/water-buffer mixture and analyzed by liquid chromatography/quadrupole time-of-flight mass spectrometry (LC/QTOF-MS) using a labeled imidacloprid internal standard and tandem mass spectrometric (MS/MS) analysis. Accurate mass tools were developed and applied to detect seven new metabolites of imidacloprid with the goal to better understand its fate in onion. The accurate mass tools include: database searching, diagnostic ions, chlorine mass filters, Mass Profiler software, and manual use of metabolic analogy. The new metabolites discovered included an amine reduction product (m/z 226.0854), and its methylated analogue (m/z 240.1010), and five other metabolites, all of unknown toxicity to insects. The accurate mass tools were combined with LC/QTOF-MS and were able to detect both known and new metabolites of imidacloprid using fragmentation studies of both parent and labeled standards. New metabolites and their structures were inferred from these MS/MS studies with accurate mass, which makes it possible to better understand imidacloprid metabolism in onion as well as new metabolite targets for toxicity studies. Copyright © 2013 John Wiley & Sons, Ltd.
Digital dream analysis: a revised method.
Bulkeley, Kelly
2014-10-01
This article demonstrates the use of a digital word search method designed to provide greater accuracy, objectivity, and speed in the study of dreams. A revised template of 40 word search categories, built into the website of the Sleep and Dream Database (SDDb), is applied to four "classic" sets of dreams: The male and female "Norm" dreams of Hall and Van de Castle (1966), the "Engine Man" dreams discussed by Hobson (1988), and the "Barb Sanders Baseline 250" dreams examined by Domhoff (2003). A word search analysis of these original dream reports shows that a digital approach can accurately identify many of the same distinctive patterns of content found by previous investigators using much more laborious and time-consuming methods. The results of this study emphasize the compatibility of word search technologies with traditional approaches to dream content analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
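The core of such a digital word search method is counting, per category, how many dream reports contain at least one category term. A minimal sketch follows; the SDDb's actual 40 categories and matching rules are not reproduced here, and the category names are illustrative.

```python
import re

def category_frequencies(reports, categories):
    """For each category (name -> list of search terms), compute the
    fraction of reports containing at least one term, matched as a
    whole word, case-insensitively."""
    freqs = {}
    for name, terms in categories.items():
        pattern = re.compile(
            r'\b(?:' + '|'.join(map(re.escape, terms)) + r')\b',
            re.IGNORECASE)
        hits = sum(1 for report in reports if pattern.search(report))
        freqs[name] = hits / len(reports)
    return freqs
```

Comparing these per-category frequencies between dream sets is what allows the distinctive content patterns found by earlier hand-coding studies to be recovered automatically.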
Sea Ice Prediction Has Easy and Difficult Years
NASA Technical Reports Server (NTRS)
Hamilton, Lawrence C.; Bitz, Cecilia M.; Blanchard-Wrigglesworth, Edward; Cutler, Matthew; Kay, Jennifer; Meier, Walter N.; Stroeve, Julienne; Wiggins, Helen
2014-01-01
Arctic sea ice follows an annual cycle, reaching its low point in September each year. The extent of sea ice remaining at this low point has been trending downwards for decades as the Arctic warms. Around the long-term downward trend, however, there is significant variation in the minimum extent from one year to the next. Accurate forecasts of yearly conditions would have great value to Arctic residents, shipping companies, and other stakeholders and are the subject of much current research. Since 2008 the Sea Ice Outlook (SIO) (http://www.arcus.org/search-program/seaiceoutlook) organized by the Study of Environmental Arctic Change (SEARCH) (http://www.arcus.org/search-program) has invited predictions of the September Arctic sea ice minimum extent, which are contributed from the Arctic research community. Individual predictions, based on a variety of approaches, are solicited in three cycles each year in early June, July, and August. (SEARCH 2013).
Testing take-the-best in new and changing environments.
Lee, Michael D; Blanco, Gabrielle; Bo, Nikole
2017-08-01
Take-the-best is a decision-making strategy that chooses between alternatives, by searching the cues representing the alternatives in order of cue validity, and choosing the alternative with the first discriminating cue. Theoretical support for take-the-best comes from the "fast and frugal" approach to modeling cognition, which assumes decision-making strategies need to be fast to cope with a competitive world, and be simple to be robust to uncertainty and environmental change. We contribute to the empirical evaluation of take-the-best in two ways. First, we generate four new environments-involving bridge lengths, hamburger prices, theme park attendances, and US university rankings-supplementing the relatively limited number of naturally cue-based environments previously considered. We find that take-the-best is as accurate as rival decision strategies that use all of the available cues. Secondly, we develop 19 new data sets characterizing the change in cities and their populations in four countries. We find that take-the-best maintains its accuracy and limited search as the environments change, even if cue validities learned in one environment are used to make decisions in another. Once again, we find that take-the-best is as accurate as rival strategies that use all of the cues. We conclude that these new evaluations support the theoretical claims of the accuracy, frugality, and robustness for take-the-best, and that the new data sets provide a valuable resource for the more general study of the relationship between effective decision-making strategies and the environments in which they operate.
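The strategy as described can be sketched directly: cues are examined in descending order of validity, and the first discriminating cue decides. A minimal illustration follows; the cue names, dictionary representation, and "guess" fallback are illustrative choices, not part of the original formulation.

```python
def take_the_best(cues_a, cues_b, validities):
    """Choose between alternatives A and B with take-the-best.

    cues_a / cues_b map cue name -> 0/1 (cue absent/present);
    validities maps cue name -> validity. Cues are searched in
    descending validity order; the first cue on which the alternatives
    differ determines the choice.
    """
    for cue in sorted(validities, key=validities.get, reverse=True):
        a, b = cues_a[cue], cues_b[cue]
        if a != b:
            return "A" if a > b else "B"
    return "guess"  # no cue discriminates
```

The frugality claim is visible in the loop: search stops at the first discriminating cue, so most cues are often never inspected.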
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.
2017-01-01
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790
Molecular biomarkers in idiopathic pulmonary fibrosis
Ley, Brett; Brown, Kevin K.
2014-01-01
Molecular biomarkers are highly desired in idiopathic pulmonary fibrosis (IPF), where they hold the potential to elucidate underlying disease mechanisms, accelerate drug development, and advance clinical management. Currently, there are no molecular biomarkers in widespread clinical use for IPF, and the search for potential markers remains in its infancy. Proposed core mechanisms in the pathogenesis of IPF for which candidate markers have been offered include alveolar epithelial cell dysfunction, immune dysregulation, and fibrogenesis. Useful markers reflect important pathological pathways, are practically and accurately measured, have undergone extensive validation, and are an improvement upon the current approach for their intended use. The successful development of useful molecular biomarkers is a central challenge for the future of translational research in IPF and will require collaborative efforts among those parties invested in advancing the care of patients with IPF. PMID:25260757
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G
2017-11-03
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.
Peptide Mass Fingerprinting of Egg White Proteins
ERIC Educational Resources Information Center
Alty, Lisa T.; LaRiviere, Frederick J.
2016-01-01
Use of advanced mass spectrometry techniques in the undergraduate setting has burgeoned in the past decade. However, relatively few undergraduate experiments examine the proteomics tools of protein digestion, peptide accurate mass determination, and database searching, also known as peptide mass fingerprinting. In this experiment, biochemistry…
Eyewitness identification accuracy and response latency: the unruly 10-12-second rule.
Weber, Nathan; Brewer, Neil; Wells, Gary L; Semmler, Carolyn; Keast, Amber
2004-09-01
Data are reported from 3,213 research eyewitnesses confirming that accurate eyewitness identifications from lineups are made faster than are inaccurate identifications. However, consistent with predictions from the recognition and search literatures, the authors did not find support for the "10-12-s rule" in which lineup identifications faster than 10-12 s maximally discriminate between accurate and inaccurate identifications (D. Dunning & S. Perretta, 2002). Instead, the time frame that proved most discriminating was highly variable across experiments, ranging from 5 s to 29 s, and the maximally discriminating time was often unimpressive in its ability to sort accurate from inaccurate identifications. The authors suggest several factors that are likely to moderate the 10-12-s rule. (c) 2004 APA, all rights reserved.
Systematic review and consensus guidelines for environmental sampling of Burkholderia pseudomallei.
Limmathurotsakul, Direk; Dance, David A B; Wuthiekanun, Vanaporn; Kaestli, Mirjam; Mayo, Mark; Warner, Jeffrey; Wagner, David M; Tuanyok, Apichai; Wertheim, Heiman; Yoke Cheng, Tan; Mukhopadhyay, Chiranjay; Puthucheary, Savithiri; Day, Nicholas P J; Steinmetz, Ivo; Currie, Bart J; Peacock, Sharon J
2013-01-01
Burkholderia pseudomallei, a Tier 1 Select Agent and the cause of melioidosis, is a Gram-negative bacillus present in the environment in many tropical countries. Defining the global pattern of B. pseudomallei distribution underpins efforts to prevent infection, and is dependent upon robust environmental sampling methodology. Our objective was to review the literature on the detection of environmental B. pseudomallei, update the risk map for melioidosis, and propose international consensus guidelines for soil sampling. An international working party (Detection of Environmental Burkholderia pseudomallei Working Party (DEBWorP)) was formed during the VIth World Melioidosis Congress in 2010. PubMed (January 1912 to December 2011) was searched using the following MeSH terms: pseudomallei or melioidosis. Bibliographies were hand-searched for secondary references. The reported geographical distribution of B. pseudomallei in the environment was mapped and categorized as definite, probable, or possible. The methodology used for detecting environmental B. pseudomallei was extracted and collated. We found that global coverage was patchy, with a lack of studies in many areas where melioidosis is suspected to occur. The sampling strategies and bacterial identification methods used were highly variable, and not all were robust. We developed consensus guidelines with the goals of reducing the probability of false-negative results, and the provision of affordable and 'low-tech' methodology that is applicable in both developed and developing countries. The proposed consensus guidelines provide the basis for the development of an accurate and comprehensive global map of environmental B. pseudomallei.
Meursinge Reynders, Reint; Ladu, Luisa; Ronchi, Laura; Di Girolamo, Nicola; de Lange, Jan; Roberts, Nia; Plüddemann, Annette
2015-04-02
Hitting a dental root during the insertion of orthodontic mini-implants (OMIs) is a common adverse effect of this intervention. This condition can permanently damage these structures and can cause implant instability. Increased torque levels (index test) recorded during the insertion of OMIs may provide a more accurate and immediate diagnosis of implant-root contact (target condition) than radiographic imaging (reference standard). An accurate index test could reduce or eliminate X-ray exposure. These issues, the common use of OMIs, the high prevalence of the target condition, and because most OMIs are placed between roots warrant a systematic review. We will assess 1) the diagnostic accuracy and the adverse effects of the index test, 2) whether OMIs with root contact have higher insertion torque values than those without, and 3) whether intermediate torque values have clinical diagnostic utility. The Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) 2015 statement was used as the guideline for reporting this protocol. Inserting implants deliberately into dental roots of human participants would not be approved by ethical review boards, and adverse effects of interventions are generally underreported. We will therefore apply broad-spectrum eligibility criteria, which will include clinical, animal and cadaver models. Not including these models could slow down knowledge translation. Both randomized and non-randomized research studies will be included. Comparisons of interest and subgroups are pre-specified. We will conduct searches in MEDLINE and more than 40 other electronic databases. We will search the grey literature and reference lists and hand-search ten journals. All methodological procedures will be conducted by three reviewers. Study selection, data extraction and analyses, and protocols for contacting authors and resolving conflicts between reviewers are described. Risk-of-bias tools specifically designed for the research question will be tailored and applied. Different research models will be analysed separately. Parameters for exploring statistical heterogeneity and conducting meta-analyses are pre-specified. The quality of evidence for outcomes will be assessed through the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. The findings of this systematic review will be useful for patients, clinicians, researchers, guideline developers, policymakers, and surgical companies.
Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou
2012-01-01
Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy is fortuitously found by the searching procedure, the correct protein structure is not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligence of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both search intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different kinds of search intelligence embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise.
Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou
2012-01-01
Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy is fortuitously found by the searching procedure, the correct protein structure is not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligence of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both search intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different kinds of search intelligence embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise. PMID:23028708
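The parallel multi-energy idea above can be illustrated on a toy optimization problem. This is a hedged sketch, not the authors' ant-colony/Metropolis implementation: the two "energy" functions, the search space, and the greedy local search are invented for illustration, and each parallel thread minimizes its own imperfect objective over the same space before results are pooled.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def energy_a(x):  # two deliberately different "inaccurate" objectives
    return sum((xi - 1.0) ** 2 for xi in x)

def energy_b(x):
    return sum(abs(xi - 1.1) for xi in x)

def local_search(energy, dims=3, steps=2000, seed=0):
    """Greedy random local search (Metropolis acceptance omitted for brevity)."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dims)]
    best = energy(x)
    for _ in range(steps):
        cand = [xi + rng.gauss(0, 0.1) for xi in x]
        e = energy(cand)
        if e < best:                    # accept only improvements
            x, best = cand, e
    return x, best

# Each thread pursues the minimum of a different energy function in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda f: local_search(*f),
                            [(energy_a, 3, 2000, 1), (energy_b, 3, 2000, 2)]))
for x, e in results:
    print([round(v, 2) for v in x], round(e, 4))
```

Pooling conformations found under several objectives is what lets the approach hedge against any single inaccurate energy function.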
Winsor, Geoffrey L; Van Rossum, Thea; Lo, Raymond; Khaira, Bhavjinder; Whiteside, Matthew D; Hancock, Robert E W; Brinkman, Fiona S L
2009-01-01
Pseudomonas aeruginosa is a well-studied opportunistic pathogen that is particularly known for its intrinsic antimicrobial resistance, diverse metabolic capacity, and its ability to cause life-threatening infections in cystic fibrosis patients. The Pseudomonas Genome Database (http://www.pseudomonas.com) was originally developed as a resource for peer-reviewed, continually updated annotation for the Pseudomonas aeruginosa PAO1 reference strain genome. In order to facilitate cross-strain and cross-species genome comparisons with other Pseudomonas species of importance, we have now expanded the database capabilities to include all Pseudomonas species, and have developed or incorporated methods to facilitate high-quality comparative genomics. The database contains robust assessment of orthologs, a novel ortholog clustering method, and incorporates five views of the data at the sequence and annotation levels (Gbrowse, Mauve and custom views) to facilitate genome comparisons. A choice of simple and more flexible user-friendly Boolean search features allows researchers to search and compare annotations or sequences within or between genomes. Other features include more accurate protein subcellular localization predictions and a user-friendly, Boolean-searchable log file of updates for the reference strain PAO1. This database aims to continue to provide a high-quality, annotated genome resource for the research community and is available under an open source license.
Development of an oxygen saturation measuring system by using near-infrared spectroscopy
NASA Astrophysics Data System (ADS)
Kono, K.; Nakamachi, E.; Morita, Y.
2017-08-01
Recently, hypoxia imaging has been recognized as an advanced technique to detect cancers because of its strong relationship with the biological characterization of cancer. In previous studies, hypoxia imaging systems for endoscopic diagnosis have been developed. However, these imaging technologies using visible light can observe only blood vessels in the gastric mucous membrane. Therefore, they could not detect scirrhous gastric cancer, which accounts for 10% of all gastric cancers and spreads rapidly into the submucous membrane. To overcome this problem, we developed a measuring system for blood oxygen saturation in the submucous membrane by using near-infrared (NIR) spectroscopy. NIR, which has high permeability for bio-tissues and high absorbency for hemoglobin, can image and observe blood vessels in the submucous membrane. An NIR system with LED lights and a CCD camera module was developed to image blood vessels. We measured blood oxygen saturation using the optical density ratio (ODR) of two wavelengths, based on the Lambert-Beer law. To image blood vessels clearly and measure blood oxygen saturation accurately, we searched for two optimum wavelengths by using a multilayer human gastric-like phantom which has the same optical properties as a human gastric membrane. By using Monte Carlo simulation of light propagation, we derived the relationship between the ODR and blood oxygen saturation and elucidated the influence of blood vessel depth on measuring blood oxygen saturation. The oxygen saturation measuring methodology was validated with experiments using our NIR system. Finally, it was confirmed that our system can accurately detect oxygen saturation in blood vessels at various depths.
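The optical-density-ratio calculation from the Lambert-Beer law can be sketched in a few lines. The intensity values below are illustrative; the study's actual wavelengths, calibration, and depth corrections differ.

```python
import math

def optical_density(i_incident, i_transmitted):
    # Lambert-Beer law: OD = log10(I0 / I)
    return math.log10(i_incident / i_transmitted)

def odr(i0_w1, i_w1, i0_w2, i_w2):
    """Optical density ratio between two wavelengths w1 and w2."""
    return optical_density(i0_w1, i_w1) / optical_density(i0_w2, i_w2)

# Hypothetical detector readings at the two chosen wavelengths
ratio = odr(100.0, 40.0, 100.0, 55.0)
print(round(ratio, 3))  # -> 1.533
```

Because oxy- and deoxyhemoglobin absorb differently at the two wavelengths, a calibration curve (derived in the paper via Monte Carlo simulation) maps such a ratio to an oxygen saturation estimate.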
A vibration correction method for free-fall absolute gravimeters
NASA Astrophysics Data System (ADS)
Qian, J.; Wang, G.; Wu, K.; Wang, L. J.
2018-02-01
An accurate determination of gravitational acceleration, usually approximated as 9.8 m s⁻², has been playing an important role in the areas of metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
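A golden section search of the kind mentioned above can be sketched as follows. One way to realize a two-dimensional version is coordinate-wise one-dimensional golden section minimization; this is an assumption, since the abstract does not specify the exact 2-D algorithm, and the toy residual below merely stands in for the transfer-function parameter fit.

```python
import math

GR = (math.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.618

def golden_section(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi]."""
    a, b = lo, hi
    c, d = b - GR * (b - a), a + GR * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - GR * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + GR * (b - a)
    return (a + b) / 2

def minimize_2d(f, bounds, sweeps=20):
    """Alternate 1-D golden section searches over each coordinate."""
    x = [sum(b) / 2 for b in bounds]
    for _ in range(sweeps):
        x[0] = golden_section(lambda u: f(u, x[1]), *bounds[0])
        x[1] = golden_section(lambda v: f(x[0], v), *bounds[1])
    return x

# Toy residual with minimum at (2.0, 0.3), standing in for the two
# hypothetical transfer-function parameters
residual = lambda gain, delay: (gain - 2.0) ** 2 + 2 * (delay - 0.3) ** 2
print([round(v, 3) for v in minimize_2d(residual, [(0, 5), (0, 1)])])  # -> [2.0, 0.3]
```

Golden section search needs no derivatives and only one new function evaluation per interval reduction in an optimized implementation, which matches the paper's emphasis on low computational cost.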
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
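The role of locality-sensitive hashing in speeding up the retrieval described above can be illustrated generically. This sketch uses random-hyperplane LSH over made-up feature vectors, not the paper's actual nuclei/Gabor/topic-model features: similar vectors tend to share a hash signature, so a query is compared only against its bucket rather than the whole database.

```python
import random
from collections import defaultdict

random.seed(0)
DIM, BITS = 8, 4
# One random hyperplane per signature bit
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def signature(v):
    # One bit per hyperplane: which side of the plane does v fall on?
    return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0)
                 for plane in planes)

# Build the hash index over a toy database of 100 feature vectors
index = defaultdict(list)
database = {f"img{i}": [random.gauss(0, 1) for _ in range(DIM)]
            for i in range(100)}
for name, vec in database.items():
    index[signature(vec)].append(name)

# A query only scans its own bucket, not all 100 entries
query = database["img7"]
candidates = index[signature(query)]
print("img7" in candidates, len(candidates) < len(database))  # -> True True
```

In practice multiple hash tables are used to trade recall against candidate-set size; a full linear scan is avoided either way, which is what makes sub-linear retrieval over thousands of whole-slide regions feasible.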
Computational materials design of crystalline solids.
Butler, Keith T; Frost, Jarvist M; Skelton, Jonathan M; Svane, Katrine L; Walsh, Aron
2016-11-07
The modelling of materials properties and processes from first principles is becoming sufficiently accurate as to facilitate the design and testing of new systems in silico. Computational materials science is both valuable and increasingly necessary for developing novel functional materials and composites that meet the requirements of next-generation technology. A range of simulation techniques are being developed and applied to problems related to materials for energy generation, storage and conversion including solar cells, nuclear reactors, batteries, fuel cells, and catalytic systems. Such techniques may combine crystal-structure prediction (global optimisation), data mining (materials informatics) and high-throughput screening with elements of machine learning. We explore the development process associated with computational materials design, from setting the requirements and descriptors to the development and testing of new materials. As a case study, we critically review progress in the fields of thermoelectrics and photovoltaics, including the simulation of lattice thermal conductivity and the search for Pb-free hybrid halide perovskites. Finally, a number of universal chemical-design principles are advanced.
Zhao, Xinjie; Zeng, Zhongda; Chen, Aiming; Lu, Xin; Zhao, Chunxia; Hu, Chunxiu; Zhou, Lina; Liu, Xinyu; Wang, Xiaolin; Hou, Xiaoli; Ye, Yaorui; Xu, Guowang
2018-05-29
Identification of metabolites is an essential step in metabolomics studies for interpreting the regulatory mechanisms of pathological and physiological processes. However, it remains a major challenge in LC-MSn-based studies because of the complexity of mass spectrometry, the chemical diversity of metabolites, and the deficiency of standards databases. In this work, a comprehensive strategy is developed for accurate and batch metabolite identification in non-targeted metabolomics studies. First, a well-defined procedure was applied to generate reliable and standard LC-MS2 data, including tR, MS1 and MS2 information, under a standard operational procedure (SOP). An in-house database including about 2000 metabolites was constructed and used to identify the metabolites in non-targeted metabolic profiling by retention time calibration using internal standards, precursor ion alignment and ion fusion, auto-MS2 information extraction and selection, and database batch searching and scoring. As an application example, a pooled serum sample was analyzed to demonstrate the strategy; 202 metabolites were identified in the positive ion mode. These results show that our strategy is useful for LC-MSn-based non-targeted metabolomics studies.
A new protocol to accurately determine microtubule lattice seam location
Zhang, Rui; Nogales, Eva
2015-09-28
Microtubules (MTs) are cylindrical polymers of αβ-tubulin that display pseudo-helical symmetry due to the presence of a lattice seam of heterologous lateral contacts. The structural similarity between α- and β-tubulin makes it difficult to computationally distinguish them in the noisy cryo-EM images, unless a marker protein for the tubulin dimer, such as kinesin motor domain, is present. We have developed a new data processing protocol that can accurately determine αβ-tubulin register and seam location for MT segments. Our strategy can handle difficult situations, where the marker protein is relatively small or the decoration of marker protein is sparse. Using this new seam-search protocol, combined with movie processing for data from a direct electron detection camera, we were able to determine the cryo-EM structures of MT at 3.5 Å resolution in different functional states. The successful distinction of α- and β-tubulin allowed us to visualize the nucleotide state at the E-site and the configuration of lateral contacts at the seam.
Characterising dark matter searches at colliders and direct detection experiments: Vector mediators
Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; ...
2015-01-09
We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: m DM, M med, g DM and g q, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.
Analysis of Propagation Plans in NSF-Funded Education Development Projects
NASA Astrophysics Data System (ADS)
Stanford, Courtney; Cole, Renee; Froyd, Jeff; Henderson, Charles; Friedrichsen, Debra; Khatri, Raina
2017-08-01
Increasing adoption and adaptation of promising instructional strategies and materials has been identified as a critical component needed to improve science, technology, engineering, and mathematics (STEM) education. This paper examines typical propagation practices and resulting outcomes of proposals written by developers of educational innovations. These proposals were analyzed using the Designing for Sustained Adoption Assessment Instrument (DSAAI), an instrument developed to evaluate propagation plans, and the results used to predict the likelihood that a successful project would result in adoption by others. We found that few education developers propose strong propagation plans. Afterwards, a follow-up analysis was conducted to see which propagation strategies developers actually used to help develop, disseminate, and support their innovations. A web search and interviews with principal investigators were used to determine the degree to which propagation plans were actually implemented and to estimate adoption of the innovations. In this study, we analyzed 71 education development proposals funded by the National Science Foundation and predicted that 80% would be unsuccessful in propagating their innovations. Follow-up data collection with a subset of these suggests that the predictions were reasonably accurate.
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-03-01
Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.
Gravitational waves from rotating neutron stars and compact binary systems
NASA Astrophysics Data System (ADS)
Wade, Leslie E., IV
It is widely anticipated that the first direct detections of gravitational waves will be made by advanced gravitational-wave detectors, such as the two Laser Interferometer Gravitational-wave Observatories (LIGO) and the Virgo interferometer. In preparation for the advanced detector era, I have worked on both detection and post-detection efforts involving two gravitational wave sources: isolated rotating neutron stars (NSs) and compact binary coalescences (CBCs). My dissertation includes three main research projects: 1) a population synthesis study assessing the detectability of isolated NSs, 2) a CBC search for intermediate-mass black-hole binaries (IMBHBs), and 3) new methods for directly measuring the neutron-star (NS) equation of state (EOS). Direct detections of gravitational waves will enrich our current astrophysical knowledge. One such contribution will be through population synthesis of isolated NSs. My collaborators and I show that advanced gravitational-wave detectors can be used to constrain the properties of the Galactic NS population. Gravitational wave detections can also shine light on a currently mysterious astrophysical object: intermediate mass black holes. In developing the IMBHB search, we performed a mock data challenge where signals with total masses up to a few hundred solar masses were injected into recolored data from LIGO's sixth science run. Since this is the first time a matched filter search has been developed to search for IMBHBs, I discuss what was learned during the mock data challenge and how we plan to improve the search going forward. The final aspect of my dissertation focuses on important post-detection science. I present results for a new method of directly measuring the NS EOS. This is done by estimating the parameters of a 4-piece polytropic EOS model that matches theoretical EOS candidates to a few percent. 
We show that advanced detectors will be capable of measuring the NS radius to within a kilometer for stars with canonical masses. However, this can only be accomplished with binary NS waveform models that are accurate to the rich EOS physics that happens near merger. We show that the waveforms typically used to model binary NS systems result in unavoidable systematic error that can significantly bias the estimation of the NS EOS.
Willard, Scott D; Nguyen, Mike M
2013-01-01
To evaluate the utility of using Internet search trends data to estimate kidney stone occurrence and understand the priorities of patients with kidney stones. Internet search trends data represent a unique resource for monitoring population self-reported illness and health information-seeking behavior. The Google Insights for Search analysis tool was used to study searches related to kidney stones, with each search term returning a search volume index (SVI) according to the search frequency relative to the total search volume. SVIs for the term, "kidney stones," were compiled by location and time parameters and compared with the published weather and stone prevalence data. Linear regression analysis was performed to determine the association of the search interest score with known epidemiologic variations in kidney stone disease, including latitude, temperature, season, and state. The frequency of the related search terms was categorized by theme and qualitatively analyzed. The SVI correlated significantly with established kidney stone epidemiologic predictors. The SVI correlated with the state latitude (R-squared=0.25; P<.001), the state mean annual temperature (R-squared=0.24; P<.001), and state combined sex prevalence (R-squared=0.25; P<.001). Female prevalence correlated more strongly than did male prevalence (R-squared=0.37; P<.001, and R-squared=0.17; P=.003, respectively). The national SVI correlated strongly with the average U.S. temperature by month (R-squared=0.54; P=.007). The search term ranking suggested that Internet users are most interested in the diagnosis, followed by etiology, infections, and treatment. Geographic and temporal variability in kidney stone disease appear to be accurately reflected in Internet search trends data. Internet search trends data might have broader applications for epidemiologic and urologic research. Copyright © 2013 Elsevier Inc. All rights reserved.
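The kind of correlation analysis reported above can be sketched with ordinary least squares. All data values below are invented for illustration; they are not the study's SVI or temperature figures.

```python
def linregress_r2(xs, ys):
    """Ordinary least squares fit y = slope*x + intercept, with R-squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical state mean annual temperatures (F) and search volume indices
temps = [45.0, 50.0, 55.0, 60.0, 65.0, 70.0]
svi   = [38, 45, 47, 55, 60, 58]
slope, intercept, r2 = linregress_r2(temps, svi)
print(round(slope, 2), round(r2, 2))  # -> 0.87 0.91
```

Reporting R-squared alongside the slope, as the abstract does, separates the strength of the linear association from its magnitude.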
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, W; Yang, J; Beadle, B
Purpose: Endoscopic examinations are routine procedures for head-and-neck cancer patients. Our goal is to develop a method to map the recorded video to CT, providing valuable information for radiotherapy treatment planning and toxicity analysis. Methods: We map video frames to CT via virtual endoscopic images rendered at the real endoscope's CT-space coordinates. We developed two complementary methods to find these coordinates by maximizing real-to-virtual image similarity: (1) Endoscope Tracking: moves the virtual endoscope frame-by-frame until the desired frame is reached. It utilizes prior knowledge of endoscope coordinates but is sensitive to local optima. (2) Location Search: moves the virtual endoscope along possible paths through the volume to find the desired frame. It is more robust but more computationally expensive. We tested these methods on clay phantoms with embedded markers for point mapping and protruding bolus material for contour mapping, and we assessed them qualitatively on three patient exams. For mapped points we calculated 3D-distance errors, and for mapped contours we calculated mean absolute distances (MAD) from CT contours. Results: In phantoms, Endoscope Tracking had an average point error of 0.66±0.50 cm and an average bolus MAD of 0.74±0.37 cm for the first 80% of each video. After that, the virtual endoscope got lost, increasing these values to 4.73±1.69 cm and 4.06±0.30 cm. Location Search had a point error of 0.49±0.44 cm and a MAD of 0.53±0.28 cm. Point errors were larger where the endoscope viewed the surface at shallow angles of <10 degrees (1.38±0.62 cm and 1.22±0.69 cm for Endoscope Tracking and Location Search, respectively). In patients, Endoscope Tracking did not make it past the nasal cavity. However, Location Search found coordinates near the correct location for 70% of test frames. Its performance was best near the epiglottis and in the nasal cavity. Conclusion: Location Search is a robust and accurate technique to map endoscopic video to CT.
Endoscope Tracking is sensitive to erratic camera motion and local optima, but could be used in conjunction with anchor points found using Location Search.
G-Bean: an ontology-graph based web tool for biomedical literature retrieval
2014-01-01
Background Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. Methods G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Results Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. Conclusions G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user. PMID:25474588
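The ontology-graph query expansion above re-ranks concepts with a Personalized PageRank. A minimal sketch of the personalized variant is given below; the toy concept graph, damping factor, and seed concept are illustrative assumptions, not G-Bean's actual UMLS ontology graph or parameters.

```python
# Minimal sketch of Personalized PageRank over a toy concept graph.
# Graph, damping factor, and seed are invented for illustration.

def personalized_pagerank(graph, seed, damping=0.85, iters=50):
    """graph: dict node -> list of neighbour nodes (outgoing edges)."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Teleport mass returns to the seed concept, not uniformly --
        # this is what makes the ranking "personalized".
        new = {n: (1 - damping) * (1.0 if n == seed else 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:
                share = damping * rank[n] / len(out)
                for m in out:
                    new[m] += share
            else:
                new[seed] += damping * rank[n]  # dangling node
        rank = new
    return rank

# Toy ontology fragment: edges denote "related concept".
g = {"fever": ["infection"], "infection": ["fever", "antibiotic"],
     "antibiotic": ["infection"], "diet": []}
scores = personalized_pagerank(g, seed="fever")
top = max(scores, key=scores.get)
```

Concepts reachable from the seed accumulate rank; unrelated concepts decay toward zero, which is what lets the expansion stay on-topic.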
G-Bean: an ontology-graph based web tool for biomedical literature retrieval.
Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S
2014-01-01
Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user.
A Framework for Quality Assurance in Child Welfare.
ERIC Educational Resources Information Center
O'Brien, Mary; Watson, Peter
In their search for new ways to assess their agencies' success in working with children and families, child welfare administrators and senior managers are increasingly seeking regular and reliable sources of information that help them evaluate agency performance, make ongoing decisions, and provide an accurate picture for agency staff and external…
In Search of a Really "Next Generation" Catalog
ERIC Educational Resources Information Center
Singer, Ross
2008-01-01
Ever since North Carolina State University Libraries launched their Endeca-based OPAC replacement in the beginning of 2006, the library world has been completely obsessed with ditching their old, tired catalog interfaces (and with good reason) for the greener pastures of more sophisticated indexing, more accurate relevance ranking, dust jackets,…
Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm
NASA Astrophysics Data System (ADS)
Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi
2014-01-01
This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of the group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.
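The group-hunting idea can be sketched as a simple population search: hunters move toward the current leader (best solution) and the pack reorganizes around the leader when progress stalls. This is a heavily simplified illustrative reduction, not the published HuS algorithm; all parameters and the test function are invented.

```python
# Much-simplified "group hunting" search: hunters converge on the leader
# with shrinking random steps; on stagnation the pack is re-scattered
# around the leader. Illustrative only -- not the published HuS method.
import random

def hunting_search(f, dim, bounds, n_hunters=20, iters=300, step=0.3, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_hunters)]
    best = min(pack, key=f)[:]
    for t in range(iters):
        for h in pack:
            for d in range(dim):
                # Move toward the leader, plus a shrinking random step.
                h[d] += rng.uniform(0, 1) * (best[d] - h[d]) \
                        + step * (1 - t / iters) * rng.uniform(-1, 1)
                h[d] = min(hi, max(lo, h[d]))
        cand = min(pack, key=f)
        if f(cand) < f(best):
            best = cand[:]
        else:
            # Reorganize: scatter the hunters around the leader.
            pack = [[min(hi, max(lo, best[d] + step * rng.uniform(-1, 1)))
                     for d in range(dim)] for _ in range(n_hunters)]
    return best, f(best)

# Minimize a toy quadratic; the optimum is at the origin.
best, val = hunting_search(lambda x: sum(v * v for v in x), dim=2, bounds=(-10, 10))
```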
Internal sense of direction and landmark use in pigeons (Columba livia).
Sutton, Jennifer E; Shettleworth, Sara J
2005-08-01
The relative importance of an internal sense of direction based on inertial cues and landmark piloting for small-scale navigation by White King pigeons (Columba livia) was investigated in an arena search task. Two groups of pigeons differed in whether they had access to visual cues outside the arena. In Experiment 1, pigeons were given experience with 2 different entrances and all pigeons transferred accurate searching to novel entrances. Explicit disorientation before entering did not affect accuracy. In Experiments 2-4, landmarks and inertial cues were put in conflict or tested 1 at a time. Pigeons tended to follow the landmarks in a conflict situation but could use an internal sense of direction to search when landmarks were unavailable. Copyright 2005 APA, all rights reserved.
Broecker, Sebastian; Herre, Sieglinde; Wüst, Bernhard; Zweigenbaum, Jerry; Pragst, Fritz
2011-04-01
A library of collision-induced dissociation (CID) accurate mass spectra has been developed for efficient use of liquid chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (LC-QTOF-MS) as a tool in systematic toxicological analysis. The mass spectra (Δm < 3 ppm) of more than 2,500 illegal and therapeutic drugs, pesticides, alkaloids, other toxic chemicals and metabolites were measured, by use of an Agilent 6530 instrument, by flow-injection of 1 ng of the pure substances in aqueous ammonium formate-formic acid-methanol, with positive and negative electrospray-ionization (ESI), selection of the protonated or deprotonated molecules [M+H](+) or [M-H](-) by the quadrupole, and collision induced dissociation (CID) with nitrogen as collision gas at CID energies of 10, 20, and 40 eV. The fragment mass spectra were controlled for structural plausibility, corrected by recalculation to the theoretical fragment masses and added to a database of accurate mass data and molecular formulas of more than 7,500 toxicologically relevant substances to form the "database and library of toxic compounds". For practical evaluation, blood and urine samples were spiked with a mixture of 33 drugs at seven concentrations between 0.5 and 500 ng mL(-1), prepared by dichloromethane extraction or protein precipitation, and analyzed by LC-QTOF-MS in data-dependent acquisition mode. Unambiguous identification by library search was possible for typical basic drugs down to 0.5-2 ng mL(-1) and for benzodiazepines down to 2-20 ng mL(-1). The efficiency of the method was also demonstrated by re-analysis of venous blood samples from 50 death cases and comparison with previous results. In conclusion, LC-QTOF-MS in data-dependent acquisition mode combined with an accurate mass database and CID spectra library seemed to be one of the most efficient tools for systematic toxicological analysis.
Kang, Chaogui; Liu, Yu; Guo, Diansheng; Qin, Kun
2015-01-01
We generalized the recently introduced “radiation model”, as an analog to the generalization of the classic “gravity model”, to consolidate its nature of universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses. PMID:26600153
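The radiation-model flux described above can be sketched in a few lines. The closed form below is the classic radiation model; the exponent `lam` applied to the intervening population is only an illustrative stand-in for the paper's scaling exponent λ, and the populations are invented numbers.

```python
# Minimal sketch of the radiation model for trip distribution.
# The exponent `lam` is an illustrative stand-in for the generalized
# model's scaling exponent, not the authors' exact formulation.

def radiation_flux(T_i, m_i, n_j, s_ij, lam=1.0):
    """Expected trips from i to j.
    T_i  -- total trips leaving location i
    m_i  -- population at origin i
    n_j  -- population at destination j
    s_ij -- population inside the circle of radius r_ij around i,
            excluding m_i and n_j
    """
    s = s_ij ** lam
    return T_i * (m_i * n_j) / ((m_i + s) * (m_i + n_j + s))

# With no intervening population the flux depends only on m_i and n_j;
# a large intervening population absorbs most trips.
f_near = radiation_flux(T_i=1000, m_i=5000, n_j=2000, s_ij=0)
f_far = radiation_flux(T_i=1000, m_i=5000, n_j=2000, s_ij=50000)
```

Unlike the gravity model, no distance-decay function is fitted: distance enters only through the intervening population s_ij.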
Skull base lesions: extracranial origins.
Mosier, Kristine M
2013-10-01
A number of extracranial anatomical sites, including the nasopharynx, paranasal sinuses, and masticator space, may give rise to lesions involving the skull base. Implicit in the nature of an invasive lesion, the majority of these lesions are malignant. Accordingly, for optimal patient outcomes and treatment planning, it is imperative to include a search pattern for extracranial sites and to assess accurately the character and extent of these diverse lesions. Of particular importance to radiologists are lesions arising from each extracranial site, the search patterns, and relevant information important to convey to the referring clinician. Copyright © 2013 Elsevier Inc. All rights reserved.
Screening methods for post-stroke visual impairment: a systematic review.
Hanna, Kerry Louise; Hepworth, Lauren Rachel; Rowe, Fiona
2017-12-01
To provide a systematic overview of the various tools available to screen for post-stroke visual impairment. A review of the literature was conducted including randomised controlled trials, controlled trials, cohort studies, observational studies, systematic reviews and retrospective medical note reviews. All languages were included and translation was obtained. Participants included adults ≥18 years old diagnosed with a visual impairment as a direct consequence of a stroke. We searched a broad range of scholarly online resources and hand-searched article registers of published, unpublished and ongoing trials. Search terms included a variety of MeSH terms and alternatives in relation to stroke and visual conditions. Study selection was performed by two authors independently. The quality of the evidence and risk of bias were assessed using the STROBE, GRACE and PRISMA statements. A total of 25 articles (n = 2924) were included in this review. Articles appraised reported on tools screening solely for visual impairments or for general post-stroke disabilities inclusive of vision. The majority of identified tools screen for visual perception including visual neglect (VN), with few screening for visual acuity (VA), visual field (VF) loss or ocular motility (OM) defects. Six articles reported on nine screening tools which combined visual screening assessment alongside screening for general stroke disabilities. Of these, three included screening for VA; three screened for VF loss; three screened for OM defects and all screened for VN. Two tools screened for all visual impairments. A further 19 articles were found which reported on individual vision screening tests in stroke populations; two for VF loss; 11 for VN and six for other visual perceptual defects. Most tools cannot accurately account for those with aphasia or communicative deficits, which are common problems following a stroke.
There is currently no standardised visual screening tool which can accurately assess all potential post-stroke visual impairments. The current tools screen for only a subset of potential stroke-related impairments, which means many visual defects may be missed. The sensitivity of those which screen for all impairments is significantly lowered when patients are unable to report their visual symptoms. Future research is required to develop a tool for assessing stroke patients that encompasses all potential visual deficits and can be easily administered by health care professionals and performed by patients, in order to ensure all stroke survivors with visual impairment are accurately identified and managed. Implications for Rehabilitation: Over 65% of stroke survivors will suffer from a visual impairment, yet 45% of stroke units do not assess vision. Visual impairment significantly reduces quality of life, through consequences such as inability to return to work or drive, and depression. This review outlines the available screening methods to accurately identify stroke survivors with visual impairments. Identifying visual impairment after stroke can aid general rehabilitation and thus improve the quality of life for these patients.
Alves, Gelio; Yu, Yi-Kuo
2016-09-01
There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry-based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics-based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
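Under extreme value statistics, the best score among many random peptide candidates follows a Gumbel distribution, so a p-value and E-value can be computed in closed form once the parameters are known. The sketch below illustrates that step only; the location/scale parameters, score, and candidate count are invented, and in practice estimating those parameters per spectrum is the costly step the paper addresses.

```python
# Sketch: assigning a p-value/E-value to a peptide-spectrum match score
# under a Gumbel (extreme value) null. Parameter values are illustrative.
import math

def gumbel_pvalue(score, mu, beta):
    """P(best random match score >= score) for a Gumbel(mu, beta) null."""
    return 1.0 - math.exp(-math.exp(-(score - mu) / beta))

def evalue(score, mu, beta, n_candidates):
    """Expected number of random candidates scoring >= score."""
    return n_candidates * gumbel_pvalue(score, mu, beta)

p = gumbel_pvalue(30.0, mu=12.0, beta=2.5)
e = evalue(30.0, mu=12.0, beta=2.5, n_candidates=100000)
```

A high score far above the Gumbel location gives a tiny p-value, but the E-value can still be large when the candidate database is big, which is why database size matters for significance.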
Ochiai, Nobuo; Mitsui, Kazuhisa; Sasamoto, Kikuo; Yoshimura, Yuta; David, Frank; Sandra, Pat
2014-09-05
A method is developed for identification of sulfur compounds in tobacco smoke extract. The method is based on large volume injection (LVI) of 10 μL of tobacco smoke extract followed by selectable one-dimensional (1D) or two-dimensional (2D) gas chromatography (GC) coupled to a hybrid quadrupole time-of-flight mass spectrometer (Q-TOF-MS) using electron ionization (EI) and positive chemical ionization (PCI), with parallel sulfur chemiluminescence detection (SCD). In order to identify each individual sulfur compound, sequential heart-cuts of 28 sulfur fractions from 1D GC to 2D GC were performed with the three MS detection modes (SCD/EI-TOF-MS, SCD/PCI-TOF-MS, and SCD/PCI-Q-TOF-MS). Thirty sulfur compounds were positively identified by MS library search, linear retention indices (LRI), molecular mass determination using PCI accurate mass spectra, formula calculation using EI and PCI accurate mass spectra, and structure elucidation using collision-activated dissociation (CAD) of the protonated molecule. Additionally, 11 molecular formulas were obtained for unknown sulfur compounds. The determined values of the identified and unknown sulfur compounds were in the range of 10-740 ng/mg total particulate matter (TPM) (RSD: 1.2-12%, n=3). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Memory for found targets interferes with subsequent performance in multiple-target visual search.
Cain, Matthew S; Mitroff, Stephen R
2013-10-01
Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC
NASA Astrophysics Data System (ADS)
Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.
2014-11-01
Cartosat-1 provides stereo images of spatial resolution 2.5 m with high fidelity of geometry. The stereo camera on the spacecraft has look angles of +26 degrees and -5 degrees, respectively, yielding effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from two-dimensional area searches to one-dimensional searches. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
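RANSAC's role here is to reject bad SIFT correspondences before the epipolar geometry is estimated. The pure-Python sketch below shows the RANSAC principle on a deliberately simple model (a 2D line rather than a fundamental matrix); the data, threshold, and iteration count are toy assumptions, not Cartosat-1 values.

```python
# Illustrative RANSAC sketch: robustly fit y = a*x + b to points
# contaminated with gross outliers, mirroring how RANSAC discards bad
# feature matches before model estimation. All data here is synthetic.
import random

def ransac_line(points, iters=200, thresh=0.5, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        # Minimal sample: two points define a candidate line.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Consensus set: points within `thresh` of the candidate line.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus 5 gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)]
pts += [(3, 40), (7, -12), (11, 99), (15, 0), (18, -5)]
(a, b), inliers = ransac_line(pts)
```

The same sample-score-refine loop applies when the model is a fundamental matrix estimated from SIFT matches; only the minimal sample size and the residual change.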
Data Recommender: An Alternative Way to Discover Open Scientific Datasets
NASA Astrophysics Data System (ADS)
Klump, J. F.; Devaraju, A.; Williams, G.; Hogan, D.; Davy, R.; Page, J.; Singh, D.; Peterson, N.
2017-12-01
Over the past few years, institutions and government agencies have adopted policies to openly release their data, which has resulted in huge amounts of open data becoming available on the web. When trying to discover the data, users face two challenges: an overload of choice and the limitations of the existing data search tools. On the one hand, there are too many datasets to choose from, and therefore, users need to spend considerable effort to find the datasets most relevant to their research. On the other hand, data portals commonly offer keyword and faceted search, which depend fully on the user queries to search and rank relevant datasets. Consequently, keyword and faceted search may return loosely related or irrelevant results, even though the results contain the query terms. They may also return highly specific results that depend more on how well metadata was authored. They do not account well for variance in metadata due to variance in author styles and preferences. The top-ranked results may also come from the same data collection, and users are unlikely to discover new and interesting datasets. These search modes mainly suit users who can express their information needs in terms of the structure and terminology of the data portals, but may pose a challenge otherwise. The above challenges reflect that we need a solution that delivers the most relevant (i.e., similar and serendipitous) datasets to users, beyond the existing search functionalities on the portals. A recommender system is an information filtering system that presents users with relevant and interesting content based on users' context and preferences. Delivering data recommendations to users can make data discovery easier, and as a result may enhance user engagement with the portal. We developed a hybrid data recommendation approach for the CSIRO Data Access Portal.
The approach leverages existing recommendation techniques (e.g., content-based filtering and item co-occurrence) to produce similar and serendipitous data recommendations. It measures the relevance between datasets based on their properties, and search and download patterns. We evaluated the recommendation approach in a user study, and the obtained user judgments revealed the ability of the approach to accurately quantify the relevance of the datasets.
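A hybrid of the two techniques named above can be sketched compactly: content-based similarity over dataset keywords, blended with item co-occurrence from download sessions. The datasets, keywords, sessions, and blend weight below are all invented for illustration; they are not CSIRO portal data or the authors' exact scoring.

```python
# Hybrid recommender sketch: blend content-based cosine similarity with
# download co-occurrence. All data and weights are invented examples.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors (dicts)."""
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

datasets = {
    "soil-moisture": Counter(["soil", "moisture", "agriculture"]),
    "rainfall":      Counter(["rain", "climate", "agriculture"]),
    "galaxy-survey": Counter(["astronomy", "survey"]),
}
# Hypothetical sessions in which datasets were downloaded together.
sessions = [{"soil-moisture", "rainfall"},
            {"soil-moisture", "rainfall"},
            {"galaxy-survey"}]

def recommend(target, alpha=0.5):
    # Count how often each other dataset co-occurs with the target.
    cooc = Counter()
    for s in sessions:
        if target in s:
            cooc.update(d for d in s if d != target)
    n = max(sum(cooc.values()), 1)
    scores = {d: alpha * cosine(datasets[target], datasets[d])
                 + (1 - alpha) * cooc[d] / n
              for d in datasets if d != target}
    return max(scores, key=scores.get)

best = recommend("soil-moisture")
```

The content term finds "similar" datasets even with no usage history, while the co-occurrence term can surface "serendipitous" items whose metadata does not overlap with the target.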
Gearing, Robin E; Lizardi, Dana
2009-09-01
Religion impacts suicidality. One's degree of religiosity can potentially serve as a protective factor against suicidal behavior. To accurately assess risk of suicide, it is imperative to understand the role of religion in suicidality. PsycINFO and MEDLINE databases were searched for published articles on religion and suicide between 1980 and 2008. Epidemiological data on suicidality across four religions, and the influence of religion on suicidality are presented. Practice guidelines are presented for incorporating religiosity into suicide risk assessment. Suicide rates and risk and protective factors for suicide vary across religions. It is essential to assess for degree of religious commitment and involvement to accurately identify suicide risk.
NASA Astrophysics Data System (ADS)
Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara
2013-07-01
Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites.
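The biased-random-walk picture can be illustrated with a toy simulation: the number of contiguously paired sites performs a random walk whose step bias depends on whether the two sequences match, absorbing either at zero (rejection) or at full pairing. The rates, lengths, and step counts below are illustrative assumptions, not the paper's free-energy model.

```python
# Toy simulation of biased random-walk pairing: matched strands walk
# forward on average and reach full pairing; mismatched strands walk
# backward and dissociate. All rates here are invented illustrations.
import random

def pair_walk(match, length=50, p_match=0.6, p_mismatch=0.4,
              steps=5000, seed=0):
    """Return the final number of paired sites (0 = dissociated,
    `length` = fully paired), starting from one tentative pairing."""
    rng = random.Random(seed)
    p = p_match if match else p_mismatch
    n = 1
    for _ in range(steps):
        n += 1 if rng.random() < p else -1
        if n == 0:        # pairing rejected, strands dissociate
            return 0
        if n == length:   # fully paired
            return length
    return n

# Fraction of 100 trials in which matched strands reach full pairing;
# mismatched strands essentially never do (gambler's-ruin argument).
wins_match = sum(pair_walk(match=True, seed=s) == 50 for s in range(100))
wins_mismatch = sum(pair_walk(match=False, seed=s) == 50 for s in range(100))
```

Even a modest per-step bias separates the two outcomes sharply, which is the mechanism behind rapid yet accurate homology recognition in this class of models.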
Rotational frequencies of transition metal hydrides for astrophysical searches in the far-infrared
NASA Technical Reports Server (NTRS)
Brown, John M.; Beaton, Stuart P.; Evenson, Kenneth M.
1993-01-01
Accurate frequencies for the lowest rotational transitions of five transition metal hydrides (CrH, FeH, CoH, NiH, and CuH) in their ground electronic states are reported to help the identification of these species in astrophysical sources from their far-infrared spectra. Accurate frequencies are determined in two ways: for CuH, by calculation from rotational constants determined from higher J transitions with an accuracy of 190 kHz; for the other species, by extrapolation to zero magnetic field from laser magnetic resonance spectra with an accuracy of 0.7 MHz.
Assigning statistical significance to proteotypic peptides via database searches
Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo
2011-01-01
Querying MS/MS spectra against a database containing only proteotypic peptides reduces data analysis time due to reduction of database size. Despite the speed advantage, this search strategy is challenged by issues of statistical significance and coverage. The former requires separating systematically significant identifications from less confident identifications, while the latter arises when the underlying peptide is not present, due to single amino acid polymorphisms (SAPs) or post-translational modifications (PTMs), in the proteotypic peptide libraries searched. To address both issues simultaneously, we have extended RAId's knowledge database to include proteotypic information, utilized RAId's statistical strategy to assign statistical significance to proteotypic peptides, and modified RAId's programs to allow for consideration of proteotypic information during database searches. The extended database alleviates the coverage problem since all annotated modifications, even those occurring within proteotypic peptides, may be considered. Taking into account the likelihoods of observation, the statistical strategy of RAId provides accurate E-value assignments regardless of whether a candidate peptide is proteotypic or not. The advantage of including proteotypic information is evidenced by its superior retrieval performance when compared to regular database searches. PMID:21055489
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed by the PSO algorithm owing to its strong global searching ability and robustness to random initial values; however, the PSO algorithm's convergence rate around the global optimum is slow. Then, when the change in the fitness function falls below a predefined value, the search is switched to the LPM to accelerate convergence. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find the global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the single PSO algorithm and the LPM in terms of global searching capability and convergence rate. Moreover, the PSO-LPM algorithm is also robust to random initial values.
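The coarse global-search role that PSO plays in this hybrid scheme can be sketched with a bare-bones global-best PSO; the pseudospectral refinement stage is omitted, and the swarm parameters and test function below are illustrative defaults, not the paper's settings.

```python
# Bare-bones global-best PSO illustrating the coarse search stage.
# Swarm parameters (w, c1, c2) are common textbook defaults, assumed here.
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(g):
                    g = pos[i][:]
    return g, f(g)

# Minimize the sphere function; the optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the hybrid scheme, `best` would then seed a local method (here, the LPM transcription) that converges fast near the optimum, compensating for PSO's slow terminal convergence.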
Directing the public to evidence-based online content.
Cooper, Crystale Purvis; Gelb, Cynthia A; Vaughn, Alexandra N; Smuland, Jenny; Hughes, Alexandra G; Hawkins, Nikki A
2015-04-01
To direct online users searching for gynecologic cancer information to accurate content, the Centers for Disease Control and Prevention's (CDC) 'Inside Knowledge: Get the Facts About Gynecologic Cancer' campaign sponsored search engine advertisements in English and Spanish. From June 2012 to August 2013, advertisements appeared when US Google users entered search terms related to gynecologic cancer. Users who clicked on the advertisements were directed to relevant content on the CDC website. Compared with the 3 months before the initiative (March-May 2012), visits to the CDC web pages linked to the advertisements were 26 times higher after the initiative began (June-August 2012) (p<0.01), and 65 times higher when the search engine advertisements were supplemented with promotion on television and additional websites (September 2012-August 2013) (p<0.01). Search engine advertisements can direct users to evidence-based content at a highly teachable moment--when they are seeking relevant information. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease
Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.
1998-01-01
The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this markup process is time-consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Starting from the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical intensity-based matching, with extensions such as hybridization (e.g. local search) to lower the number of objective function calls and to refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
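The hybridization described here, an evolutionary search wrapped around a local hill-climbing refinement of each candidate, can be illustrated on a one-dimensional toy matching problem: find the shift that best aligns a template with a longer intensity profile under normalized cross-correlation. The signal, template, and parameters are invented for the sketch; real SAR/optical matching searches a 2D (or higher-dimensional) transform space.

```python
import random

def ncc(a, b):
    """Zero-mean normalized cross-correlation of equal-length sequences."""
    ma = sum(a) / len(a); mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def fitness(shift, signal, template):
    return ncc(signal[shift:shift + len(template)], template)

def hybrid_ea(signal, template, pop_size=10, gens=30, seed=1):
    rng = random.Random(seed)
    max_shift = len(signal) - len(template)
    pop = [rng.randrange(max_shift + 1) for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=lambda s: -fitness(s, signal, template))[:pop_size // 2]
        children = [min(max_shift, max(0, p + rng.randint(-5, 5))) for p in parents]
        refined = []
        for c in children:
            while True:  # hybridization: local hill climbing on each child
                nbrs = [n for n in (c - 1, c + 1) if 0 <= n <= max_shift]
                best = max([c] + nbrs, key=lambda s: fitness(s, signal, template))
                if best == c:
                    break
                c = best
            refined.append(c)
        pop = parents + refined
    return max(pop, key=lambda s: fitness(s, signal, template))

# Synthetic example: the template is buried in the signal at offset 37
template = [0, 1, 4, 9, 4, 1, 0, -3, -1, 2]
signal = [0] * 37 + template + [0] * 20
best = hybrid_ea(signal, template)
```

The local search pulls each candidate to the nearest similarity peak, so the evolutionary layer only has to place candidates in the right neighborhood, which is the cost-saving mechanism the abstract describes.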
Accurate measurement of the first excited nuclear state in 235U
NASA Astrophysics Data System (ADS)
Ponce, F.; Swanberg, E.; Burke, J.; Henderson, R.; Friedrich, S.
2018-05-01
We have used superconducting high-resolution radiation detectors to measure the energy level of the metastable isomer 235mU as 76.737 ± 0.018 eV. The 235mU isomer is created from the α decay of 239Pu and embedded directly into the detector. When the 235mU subsequently decays, the energy is fully contained within the detector and is independent of the decay mode or the chemical state of the uranium. The detector is calibrated using an energy comb from a pulsed UV laser. A comparable measurement of the metastable 229mTh nucleus would enable a laser search for the exact 229Th → 229mTh transition energy as a step towards developing the first ever nuclear (baryonic) clock.
Sarkar, Rashmi; Arora, Pooja; Garg, Vijay Kumar; Sonthalia, Sidharth; Gokhale, Narendra
2014-01-01
Melasma is an acquired pigmentary disorder characterized by symmetrical hyperpigmented macules on the face. Its pathogenesis is complex and involves the interplay of various factors such as genetic predisposition, ultraviolet radiation, hormonal factors, and drugs. An insight into the pathogenesis is important to devise treatment modalities that accurately target the disease process and prevent relapses. Hydroquinone remains the gold standard of treatment though many newer drugs, especially plant extracts, have been developed in the last few years. In this article, we review the pathogenetic factors involved in melasma. We also describe the newer treatment options available and their efficacy. We carried out a PubMed search using the following terms “melasma, pathogenesis, etiology, diagnosis, treatment” and have included data of the last few years. PMID:25396123
NASA Astrophysics Data System (ADS)
Tashakkori, H.; Rajabifard, A.; Kalantari, M.
2016-10-01
Search and rescue procedures for indoor environments are quite complicated because much of the indoor information is unavailable to rescuers before physical entrance to the incident scene. Thus, decisions about the number of crew required and how they should be dispatched in the building, considering the various access points and complexities of buildings, in order to cover the search area in minimum time depend on the prior knowledge and experience of the emergency commanders. Hence, this paper introduces the Search and Rescue Problem (SRP), which aims at finding the best search and rescue routes that minimize the overall search time in buildings. A 3D BIM-oriented indoor GIS is integrated into the indoor route graph to find accurate routes based on the building's geometric and semantic information. An ant colony based algorithm is presented that finds the number of first responders required and their individual routes to search all rooms and points of interest inside the building so as to minimize the overall time spent by all rescuers inside the disaster area. The evaluation of the proposed model for a case study building shows a significant improvement in search and rescue time, which will lead to a higher chance of saving lives and less exposure of the emergency crew to danger.
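The ant colony component can be sketched for a single rescuer as a route-construction problem: visit every room once, starting from an entrance, minimizing total travel time. The graph, distances, and parameter values below are invented for illustration; the paper's model additionally splits rooms among multiple responders.

```python
import random

def aco_route(dist, start=0, n_ants=20, iters=100, alpha=1.0, beta=2.0,
              rho=0.5, q=1.0, seed=3):
    """Ant colony search for a short route visiting every node exactly once."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    rng = random.Random(seed)
    best_route, best_len = None, float("inf")
    for _ in range(iters):
        sampled = []
        for _ in range(n_ants):
            route, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = route[-1]
                cand = list(unvisited)
                # Transition weight: pheromone^alpha * (1/distance)^beta
                weights = [tau[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                           for j in cand]
                nxt = rng.choices(cand, weights=weights)[0]
                route.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[a][b] for a, b in zip(route, route[1:]))
            sampled.append((length, route))
            if length < best_len:
                best_len, best_route = length, route
        # Evaporate, then deposit pheromone proportional to route quality
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for length, route in sampled:
            for a, b in zip(route, route[1:]):
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_route, best_len

# Hypothetical travel times between an entrance (node 0) and four rooms
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
route, length = aco_route(dist)
```

On an instance this small the optimum can be brute-forced, which makes it easy to check that the colony converges to it.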
A Knowledge Database on Thermal Control in Manufacturing Processes
NASA Astrophysics Data System (ADS)
Hirasawa, Shigeki; Satoh, Isao
A prototype version of a knowledge database on thermal control in manufacturing processes, specifically molding, semiconductor manufacturing, and micro-scale manufacturing, has been developed. The knowledge database has search functions for technical data, evaluated benchmark data, academic papers, and patents. The database also displays trends and future roadmaps for research topics, and it has quick-calculation functions for basic design. This paper summarizes present and future research topics on thermal control in manufacturing engineering in order to collate the information into the knowledge database. In the molding process, the initial mold and melt temperatures are very important parameters. Thermal control is also related to many semiconductor processes, where the main parameter is temperature variation across wafers, and accurate in-situ temperature measurement of wafers is important. In addition, many technologies are being developed to manufacture micro-structures. Accordingly, the knowledge database will help further advance these technologies.
2017-01-01
Background Patient and consumer access to eHealth information is of crucial importance because of its role in patient-centered medicine and to improve knowledge about general aspects of health and medical topics. Objectives The objectives were to analyze and compare eHealth search patterns in a private (United States) and a public (United Kingdom) health care market. Methods A new taxonomy of eHealth websites is proposed to organize the largest eHealth websites. An online measurement framework is developed that provides a precise and detailed measurement system. Online panel data are used to accurately track and analyze detailed search behavior across 100 of the largest eHealth websites in the US and UK health care markets. Results The health, medical, and lifestyle categories account for approximately 90% of online activity, and e-pharmacies, social media, and professional categories account for the remaining 10% of online activity. Overall search penetration of eHealth websites is significantly higher in the private (United States) than the public market (United Kingdom). Almost twice the number of eHealth users in the private market have adopted online search in the health and lifestyle categories and also spend more time per website than those in the public market. The use of medical websites for specific conditions is almost identical in both markets. The allocation of search effort across categories is similar in both markets. For all categories, the vast majority of eHealth users only access one website within each category. Those that conduct a search of two or more websites display very narrow search patterns. All users spend relatively little time on eHealth, that is, 3-7 minutes per website. Conclusions The proposed online measurement framework exploits online panel data to provide a powerful and objective method of analyzing and exploring eHealth behavior.
The private health care system does appear to have an influence on eHealth search behavior in terms of search penetration and time spent per website in the health and lifestyle categories. Two explanations are offered: (1) the personal incentive of medical costs in the private market incentivizes users to conduct online search; and (2) health care information is more easily accessible through health care professionals in the United Kingdom compared with the United States. However, the use of medical websites is almost identical, suggesting that patients interested in a specific condition have a motivation to search and evaluate health information, irrespective of the health care market. The relatively low level of search in terms of the number of websites accessed and the average time per website raises important questions about the actual level of patient informedness in both markets. Areas for future research are outlined. PMID:28408362
Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice.
Riva, Silvia; Monti, Marco; Antonietti, Alessandro
2011-01-01
Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by the National and the International Health and Drug Administration, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. This study examined how subjects make their decisions to select an OTC drug, evaluating the role of cognitive heuristics which are simple and adaptive rules that help the decision-making process of people in everyday contexts. By analyzing 70 subjects' information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants' choices in a virtual environment. We found that subjects' information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics which correctly predicted subjects' decisions implied significantly fewer cues on average than the subjects did in the information-search task, they were accurate in describing order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects' decisions. The current emphasis in health care is to shift some responsibility onto the consumer through expansion of self medication. To know which cognitive mechanisms are behind the choice of OTC drugs is becoming a relevant purpose of current medical education. 
These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and for current medical education, which has to prepare competent health specialists to orientate and support the choices of their patients.
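The two decision rules compared in the study, a fast-and-frugal tree and a tallying rule, are simple enough to sketch directly. The cue names, tree structure, and threshold below are invented for illustration; the study's actual cue set is not given in the abstract.

```python
def fft_decide(cues, tree):
    """Fast-and-frugal tree: inspect one cue at a time; every cue but the
    last has a single exit that stops the search immediately."""
    for name, exit_on, decision in tree[:-1]:
        if cues[name] == exit_on:
            return decision
    last_cue = tree[-1][0]
    return "buy" if cues[last_cue] else "reject"

def tally_decide(cues, names, threshold=2):
    """Tallying: count positive cues with equal weights, ignoring cue order."""
    return "buy" if sum(bool(cues[n]) for n in names) >= threshold else "reject"

# Hypothetical cue profile for an over-the-counter drug choice
TREE = [
    ("recommended_by_pharmacist", False, "reject"),  # exit if not recommended
    ("familiar_brand", True, "buy"),                 # exit if brand is familiar
    ("low_price", None, None),                       # final cue decides both ways
]
CUES = {"recommended_by_pharmacist": True, "familiar_brand": False, "low_price": True}
```

The tree typically consults fewer cues than tallying does, which is the "frugal yet accurate" contrast the study reports.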
Search for gravitational waves from LIGO-Virgo science run and data interpretation
NASA Astrophysics Data System (ADS)
Biswas, Rahul
A search for gravitational wave events was performed on data jointly taken during LIGO's fifth science run (S5) and Virgo's first science run (VSR1). The data taken during this period were broken down into five separate months; I report the analysis performed on one of these months. Apart from the search, I describe work related to estimating the event rate based on the loudest event in the search, and demonstrate methods for constructing rate intervals at 90% confidence level and for combining rates from multiple experiments of similar duration. To have confidence in a detection, accurate estimation of the false alarm probability (F.A.P.) associated with the event candidate is required. Current false alarm estimation techniques limit our ability to measure the F.A.P. to about 1 in 100; I describe a method that significantly improves this estimate using information from multiple detectors. Besides accurate knowledge of the F.A.P., detection also depends on our ability to distinguish real signals from noise. Several tests exist that use the quality of the signal to differentiate between real signals and noise. The chi-square test is one such computationally expensive test applied in our search; we study the dependence of the chi-square parameter on the signal-to-noise ratio (SNR) for a given signal, which helps us model the chi-square parameter as a function of SNR. The two detectors at Hanford, WA, H1 (4 km) and H2 (2 km), share the same vacuum system and hence their noise is correlated. Our present method of background estimation cannot capture this correlation and often underestimates the background when only H1 and H2 are operating. I describe a novel method of time-reversed filtering to correctly estimate the background.
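The signal-consistency idea behind the chi-square test can be illustrated with a toy version of the time-frequency discriminator used in such searches: split the matched-filter SNR into p frequency bands and penalize candidates whose per-band contributions deviate from the expected equal share. This is a schematic sketch under that equal-share assumption, not the pipeline's implementation, and normalization conventions vary between analyses.

```python
def chi_square_veto(band_snrs):
    """Toy time-frequency chi-square: a real chirp spreads its SNR roughly
    evenly across bands, so large values flag noise glitches whose power
    is concentrated unlike a genuine signal."""
    p = len(band_snrs)
    total = sum(band_snrs)
    expected = total / p
    return p * sum((z - expected) ** 2 for z in band_snrs)
```

A loud glitch confined to one band scores high and can be vetoed even when its total SNR matches a plausible signal, which is why the statistic must be modeled as a function of SNR before thresholding.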
Chen, Chen Hsiu; Kuo, Su Ching; Tang, Siew Tzuh
2017-05-01
No systematic meta-analysis is available on the prevalence of cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. To examine the prevalence of advanced/terminal cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. Systematic review and meta-analysis. MEDLINE, Embase, The Cochrane Library, CINAHL, and PsycINFO were systematically searched on accurate prognostic awareness in adult patients with advanced/terminal cancer (1990-2014). Pooled prevalences were calculated for accurate prognostic awareness by a random-effects model. Differences in weighted estimates of accurate prognostic awareness were compared by meta-regression. In total, 34 articles were retrieved for systematic review and meta-analysis. At best, only about half of advanced/terminal cancer patients accurately understood their prognosis (49.1%; 95% confidence interval: 42.7%-55.5%; range: 5.4%-85.7%). Accurate prognostic awareness was independent of service received and publication year, but highest in Australia, followed by East Asia, North America, and southern Europe and the United Kingdom (67.7%, 60.7%, 52.8%, and 36.0%, respectively; p = 0.019). Accurate prognostic awareness was higher by clinician assessment than by patient report (63.2% vs 44.5%, p < 0.001). Less than half of advanced/terminal cancer patients accurately understood their prognosis, with significant variations by region and assessment method. Healthcare professionals should thoroughly assess advanced/terminal cancer patients' preferences for prognostic information and engage them in prognostic discussion early in the cancer trajectory, thus facilitating their accurate prognostic awareness and the quality of end-of-life care decision-making.
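The pooled-prevalence calculation described in the methods (random-effects pooling) can be sketched with a DerSimonian-Laird estimator on the logit scale. This is an illustrative implementation, not the authors' code; it assumes every study proportion is strictly between 0 and 1.

```python
import math

def pooled_prevalence(events, totals):
    """Random-effects (DerSimonian-Laird) pooled prevalence on the logit scale."""
    k = len(events)
    y, v = [], []
    for e, n in zip(events, totals):
        p = e / n
        y.append(math.log(p / (1 - p)))             # logit transform
        v.append(1 / (n * p) + 1 / (n * (1 - p)))   # approx. variance of the logit
    w = [1 / vi for vi in v]                        # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0     # between-study variance
    wr = [1 / (vi + tau2) for vi in v]              # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return 1 / (1 + math.exp(-pooled_logit))        # back-transform to a proportion
```

Working on the logit scale keeps the pooled estimate inside (0, 1), and the tau-squared term widens the weights when studies disagree, as they do across regions in this meta-analysis.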
Bat detective-Deep learning tools for bat acoustic signal detection.
Mac Aodha, Oisin; Gibb, Rory; Barlow, Kate E; Browning, Ella; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R; Newson, Stuart E; Pandourski, Ivan; Parsons, Stuart; Russ, Jon; Szodoray-Paradi, Abigel; Szodoray-Paradi, Farkas; Tilova, Elena; Girolami, Mark; Brostow, Gabriel; Jones, Kate E
2018-03-01
Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio which is particularly problematic in noisy recordings. We developed a convolutional neural network based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio.
Clark, Alex M; Bunin, Barry A; Litterman, Nadia K; Schürer, Stephan C; Visser, Ubbo
2014-01-01
Bioinformatics and computer aided drug design rely on the curation of a large number of protocols for biological assays that measure the ability of potential drugs to achieve a therapeutic effect. These assay protocols are generally published by scientists in the form of plain text, which needs to be more precisely annotated in order to be useful to software methods. We have developed a pragmatic approach to describing assays according to the semantic definitions of the BioAssay Ontology (BAO) project, using a hybrid of machine learning based on natural language processing, and a simplified user interface designed to help scientists curate their data with minimum effort. We have carried out this work based on the premise that pure machine learning is insufficiently accurate, and that expecting scientists to find the time to annotate their protocols manually is unrealistic. By combining these approaches, we have created an effective prototype for which annotation of bioassay text within the domain of the training set can be accomplished very quickly. Well-trained annotations require single-click user approval, while annotations from outside the training set domain can be identified using the search feature of a well-designed user interface, and subsequently used to improve the underlying models. By drastically reducing the time required for scientists to annotate their assays, we can realistically advocate for semantic annotation to become a standard part of the publication process. Once even a small proportion of the public body of bioassay data is marked up, bioinformatics researchers can begin to construct sophisticated and useful searching and analysis algorithms that will provide a diverse and powerful set of tools for drug discovery researchers.
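The hybrid annotation idea (machine suggestions that a curator confirms with a click) can be caricatured with a tiny bag-of-words scorer that ranks candidate ontology terms by word evidence from previously curated protocols. The assay texts and term labels below are invented, and this is far simpler than the NLP models the paper describes.

```python
import math
from collections import Counter, defaultdict

class AnnotationSuggester:
    """Rank candidate ontology terms for an assay description using
    Laplace-smoothed word evidence from previously curated protocols."""
    def __init__(self):
        self.term_docs = defaultdict(list)

    def train(self, text, term):
        self.term_docs[term].append(Counter(text.lower().split()))

    def suggest(self, text, k=1):
        words = text.lower().split()
        scores = {}
        for term, docs in self.term_docs.items():
            vocab = Counter()
            for d in docs:
                vocab.update(d)
            total = sum(vocab.values())
            # Smoothed log-likelihood of the query words under this term
            scores[term] = sum(math.log((vocab[w] + 1) / (total + 1000))
                               for w in words)
        return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical curated examples (invented text and BAO-style labels)
s = AnnotationSuggester()
s.train("luciferase reporter assay measuring transcription", "reporter gene assay")
s.train("radioligand binding displacement", "binding assay")
```

A curator would see the top-ranked term as a one-click suggestion; approvals feed back as new training examples, which is the loop the paper advocates.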
Solar-based navigation for robotic explorers
NASA Astrophysics Data System (ADS)
Shillcutt, Kimberly Jo
2000-12-01
This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step by step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. 
Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency, productivity and lifetime of robotic explorers, along with new solar navigation abilities.
Enhanced sensitivity of CpG island search and primer design based on predicted CpG island position.
Park, Hyun-Chul; Ahn, Eu-Ree; Jung, Ju Yeon; Park, Ji-Hye; Lee, Jee Won; Lim, Si-Keun; Kim, Won
2018-05-01
DNA methylation has important biological roles, such as gene expression regulation, as well as practical applications in forensics, such as in body fluid identification and age estimation. DNA methylation often occurs in the CpG site, and methylation within the CpG islands affects various cellular functions and is related to tissue-specific identification. Several programs have been developed to identify CpG islands; however, the size, location, and number of predicted CpG islands are not identical due to different search algorithms. In addition, they only provide structural information for predicted CpG islands without experimental information, such as primer design. We developed an analysis pipeline package, CpGPNP, to integrate CpG island prediction and primer design. CpGPNP predicts CpG islands more accurately and sensitively than other programs, and designs primers easily based on the predicted CpG island locations. The primer design function included standard, bisulfite, and methylation-specific PCR to identify the methylation of particular CpG sites. In this study, we performed CpG island prediction on all chromosomes and compared CpG island search performance of CpGPNP with other CpG island prediction programs. In addition, we compared the position of primers designed for a specific region within the predicted CpG island using other bisulfite PCR primer programs. The primers designed by CpGPNP were used to experimentally verify the amplification of the target region of markers for body fluid identification and age estimation. CpGPNP is freely available at http://forensicdna.kr/cpgpnp/. Copyright © 2018 Elsevier B.V. All rights reserved.
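A minimal CpG island scan of the kind such programs perform can be sketched with the classic sliding-window criteria (GC content of at least 50%, observed/expected CpG ratio of at least 0.6, over windows of 200 bp). Thresholds and the merge step vary between prediction programs, and this is not CpGPNP's algorithm.

```python
def cpg_islands(seq, window=200, min_gc=0.5, min_oe=0.6):
    """Sliding-window CpG island scan using the classic GC-content and
    observed/expected CpG criteria; returns merged (start, end) intervals."""
    seq = seq.upper()
    islands = []
    for start in range(0, len(seq) - window + 1):
        w = seq[start:start + window]
        c, g = w.count("C"), w.count("G")
        cpg = w.count("CG")
        gc = (c + g) / window
        # Observed/expected CpG = (#CpG * N) / (#C * #G)
        oe = (cpg * window) / (c * g) if c and g else 0.0
        if gc >= min_gc and oe >= min_oe:
            islands.append((start, start + window))
    # Merge overlapping qualifying windows into maximal islands
    merged = []
    for s, e in islands:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```

Primer design of the kind CpGPNP automates would then place primer pairs relative to these merged intervals; the different thresholds and smoothing choices in published programs are exactly why their predicted islands disagree in size, location, and number.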
Movallali, Guita; Sajedi, Firoozeh
2014-03-01
The use of the internet as a source of information gathering, self-help and support is becoming increasingly recognized. Parents and professionals of children with hearing impairment have been shown to seek information about different communication approaches online. Cued Speech is a very new approach for Persian-speaking pupils. Our aim was to develop a useful website giving related information about Persian Cued Speech to parents and professionals of children with hearing impairment. All Cued Speech websites from different countries that fell within the first ten pages of Google and Yahoo search-engine results were assessed. Main subjects and links were studied. All related information was gathered from the websites, textbooks, articles, etc. Using a framework that combined several criteria for health-information websites, we developed the Persian Cued Speech website for three distinct audiences (parents, professionals and children). An accurate, complete, accessible and readable resource about Persian Cued Speech for parents and professionals is now available.
Tourette Syndrome and Tic Disorders
Leckman, James F.
2005-01-01
Objective: This is a practical review of Tourette syndrome, including phenomenology, natural history, and state-of-the-art assessment and treatment. Method: Computerized literature searches were conducted under the keywords Tourette syndrome, tics, and children-adolescents. Results: Studies have documented the natural history of Tourette syndrome and its frequent co-occurrence with attention problems, obsessive-compulsive disorder (OCD), and a range of other mood and anxiety disorders, which are often of primary concern to patients and their families. Proper diagnosis and education are often very helpful for patients, parents, siblings, teachers, and peers. When necessary, available anti-tic treatments have proven efficacious. First-line options include the alpha adrenergic agents and the atypical neuroleptics, as well as behavioral interventions such as habit reversal. Conclusions: The study of tics and Tourette syndrome has led to the development of several pathophysiological models and helped in the development of management options. However, fully explanatory models are still needed that would allow for accurate prognostication in the course of illness and the development of improved treatments. PMID:21152158
Burk, Joshua A.; Fleckenstein, Katarina; Kozikowski, C. Teal
2018-01-01
The current work examined the unique contribution that autistic traits and social anxiety have on tasks examining attention and emotion processing. In Study 1, 119 typically-developing college students completed a flanker task assessing the control of attention to target faces and away from distracting faces during emotion identification. In Study 2, 208 typically-developing college students performed a visual search task which required identification of whether a series of 8 or 16 emotional faces depicted the same or different emotions. Participants with more self-reported autistic traits performed more slowly on the flanker task in Study 1 than those with fewer autistic traits when stimuli depicted complex emotions. In Study 2, participants higher in social anxiety performed less accurately on trials showing all complex faces; participants with autistic traits showed no differences. These studies suggest that traits related to autism and to social anxiety differentially impact social cognitive processing. PMID:29596523
Toews, Lorraine C
2017-07-01
Complete, accurate reporting of systematic reviews facilitates assessment of how well reviews have been conducted. The primary objective of this study was to examine compliance of systematic reviews in veterinary journals with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for literature search reporting and to examine the completeness, bias, and reproducibility of the searches in these reviews from what was reported. The second objective was to examine reporting of the credentials and contributions of those involved in the search process. A sample of systematic reviews or meta-analyses published in veterinary journals between 2011 and 2015 was obtained by searching PubMed. Reporting in the full text of each review was checked against certain PRISMA checklist items. Over one-third of reviews (37%) did not search the CAB Abstracts database, and 9% of reviews searched only 1 database. Nearly two-thirds of reviews (65%) did not report any search for grey literature or stated that they excluded grey literature. The majority of reviews (95%) did not report a reproducible search strategy. Most reviews had significant deficiencies in reporting the search process that raise questions about how these searches were conducted and ultimately cast serious doubts on the validity and reliability of reviews based on a potentially biased and incomplete body of literature. These deficiencies also highlight the need for veterinary journal editors and publishers to be more rigorous in requiring adherence to PRISMA guidelines and to encourage veterinary researchers to include librarians or information specialists on systematic review teams to improve the quality and reporting of searches.
Bompastore, Nicholas J; Cisu, Theodore; Holoch, Peter
2018-04-30
To characterize available information about Peyronie disease online and evaluate its readability, quality, accuracy, and respective associations with HONcode certification and website category. The search term "Peyronie disease" was queried on 3 major search engines (Google, Bing, and Yahoo) and the first 50 search results on each search engine were assessed. All websites were categorized as institutional or reference, commercial, charitable, personal or patient support, or alternative medicine, and cross-referenced with the Health on the Net (HON) Foundation. Websites that met the inclusion criteria were analyzed for readability using 3 validated algorithms, for quality using the DISCERN instrument, and for accuracy by a fellowship-trained urologist. On average, online health information about treatment of Peyronie disease is written at or above the 11th grade level, exceeding the current reading guidelines of 6th-8th grade. The mean total DISCERN score for all website categories was 50.44 (standard deviation [SD] 11.94), the upper range of "fair" quality. The mean accuracy score of all online Peyronie treatment information was 2.76 (SD 1.23), corresponding to only 25%-50% accurate information. Both institutional or reference and HONcode-certified websites were of "good" quality (53.44, SD 11.64 and 60.86, SD 8.74, respectively). Institutional or reference websites were 50%-75% accurate (3.13, SD 1.20). Most of the online Peyronie disease treatment information is of mediocre quality and accuracy. The information from institutional or reference websites is of better quality and accuracy, and the information from HONcode-certified websites is of better quality. The mean readability of all websites exceeds the reading ability of most US adults by several grade levels. Copyright © 2018 Elsevier Inc. All rights reserved.
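Readability algorithms of the kind used here (e.g. Flesch-Kincaid) reduce to simple formulas over word, sentence, and syllable counts: grade = 0.39 (words/sentence) + 11.8 (syllables/word) − 15.59. A minimal sketch using a crude vowel-group syllable heuristic, not any of the three validated implementations the study applied:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (including y)
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade-level estimate for a block of English text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Short words and short sentences drive the grade down, which is why patient-facing guidelines target roughly 6th-8th grade output.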
NASA Astrophysics Data System (ADS)
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is introduced, which helps balance local search ability against global exploitation capability, and the formula by which the scout bees search for a new food source is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with those of the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than the standard PSO method.
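The baseline loop that the IABC modifies is the standard artificial bee colony: employed and onlooker bees perturb food sources toward random partners and keep improvements, while scouts reinitialize sources that stagnate. A generic sketch on a toy quadratic objective, not the paper's guided-search variant or the Bouc-Wen identification itself:

```python
import random

def abc_minimize(f, dim, bounds, n_food=20, limit=30, iters=200, seed=0):
    """Minimal standard ABC: greedy one-coordinate moves plus scout resets."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best_fit, best_x = min(zip(fits, foods))
    for _ in range(iters):
        # Employed/onlooker step: perturb one coordinate toward a random partner
        for i in range(n_food):
            k = rng.randrange(n_food - 1)
            if k >= i:
                k += 1
            j = rng.randrange(dim)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand[j] = min(hi, max(lo, cand[j]))
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # Scout step: abandon food sources that have stagnated too long
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
        cur_fit, cur_x = min(zip(fits, foods))
        if cur_fit < best_fit:
            best_fit, best_x = cur_fit, list(cur_x)
    return best_fit, best_x
```

For the identification task, f would be the error between simulated and measured Bouc-Wen responses over the candidate parameter vector.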
Electroencephalography epilepsy classifications using hybrid cuckoo search and neural network
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Damayanti, A.; Miswanto
2017-07-01
Epilepsy is a condition that affects the brain and causes repeated seizures. These seizures are episodes that can vary from brief, nearly undetectable events to long periods of vigorous shaking or brain contractions. Epilepsy can often be confirmed with an electroencephalogram (EEG). Neural networks have been used in biomedical signal analysis and have successfully classified biomedical signals such as the EEG. In this paper, a hybrid of cuckoo search and a neural network is used to recognize EEG signals for epilepsy classification. The weights of the multilayer perceptron are optimized by the cuckoo search algorithm based on its error. The aim of this method is to make the network reach a local or global optimum faster, so that the classification process becomes more accurate. Compared with the traditional multilayer perceptron, the hybrid of cuckoo search and multilayer perceptron provides better performance in terms of error convergence and accuracy. The proposed method gives an MSE of 0.001 and an accuracy of 90.0%.
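In such a hybrid, cuckoo search treats the MLP's weight vector as a nest and the network's classification error as the objective. A minimal generic cuckoo-search sketch in the usual Yang-Deb form (Lévy flights plus nest abandonment), shown here minimizing a toy quadratic in place of the MLP error and not reproducing the authors' exact settings:

```python
import math, random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1) or 1e-12  # guard against a zero divisor
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=300, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda xs: [min(hi, max(lo, x)) for x in xs]
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fits = [f(x) for x in nests]
    for _ in range(iters):
        best = nests[fits.index(min(fits))]
        # Lay a new egg: Levy flight from a random nest, scaled by distance to the best
        i = rng.randrange(n_nests)
        cand = clip([nests[i][d] + 0.01 * levy_step(rng) * (nests[i][d] - best[d])
                     for d in range(dim)])
        fc = f(cand)
        if fc < fits[i]:
            nests[i], fits[i] = cand, fc
        # A fraction pa of nests is abandoned and rebuilt by a local random walk
        for j in range(n_nests):
            if rng.random() < pa:
                cand = clip([nests[j][d] + rng.uniform(-1, 1) for d in range(dim)])
                fc = f(cand)
                if fc < fits[j]:
                    nests[j], fits[j] = cand, fc
    b = fits.index(min(fits))
    return fits[b], nests[b]
```

Swapping the quadratic for an MLP's training error over its flattened weight vector yields the hybrid classifier described in the abstract.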
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
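The Robbins-Monro procedure referenced here is a stochastic root-finder: a trial confidence limit is nudged up or down depending on the outcome of each noisy test, with step sizes shrinking as gain/n. A sketch of the recursion in its textbook quantile-finding form, under those generic assumptions rather than the paper's adaptive-test version:

```python
import random

def rm_quantile(draw, p, x0=0.0, steps=50000, gain=2.0, seed=7):
    """Robbins-Monro recursion x_{n+1} = x_n - (gain/n) * (1{Y <= x_n} - p),
    whose root is the p-quantile of the distribution sampled by draw(rng)."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, steps + 1):
        y = draw(rng)
        x -= (gain / n) * ((1.0 if y <= x else 0.0) - p)
    return x
```

The many simulated draws per limit are exactly the computational burden the proposed few-tests approximation avoids.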
Support patient search on pathology reports with interactive online learning based data extraction.
Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng
2015-01-01
Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographic data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests.
Extracting data from pathology reports could enable more accurate knowledge to support biomedical research and clinical diagnosis. IDEAL-X provides a bridge that takes advantage of online machine learning based data extraction and the knowledge from human's feedback. By combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search.
Accurate millimetre and submillimetre rest frequencies for cis- and trans-dithioformic acid, HCSSH
NASA Astrophysics Data System (ADS)
Prudenzano, D.; Laas, J.; Bizzocchi, L.; Lattanzi, V.; Endres, C.; Giuliano, B. M.; Spezzano, S.; Palumbo, M. E.; Caselli, P.
2018-04-01
Context. A better understanding of sulphur chemistry is needed to solve the interstellar sulphur depletion problem. A way to achieve this goal is to study new S-bearing molecules in the laboratory, obtaining accurate rest frequencies for astronomical searches. We focus on dithioformic acid, HCSSH, which is the sulphur analogue of formic acid. Aims: The aim of this study is to provide an accurate line list of the two HCSSH trans and cis isomers in their electronic ground state and a comprehensive centrifugal distortion analysis with an extension of measurements in the millimetre and submillimetre range. Methods: We studied the two isomers in the laboratory using an absorption spectrometer employing the frequency-modulation technique. The molecules were produced directly within a free-space cell by glow discharge of a gas mixture. We measured lines belonging to the electronic ground state up to 478 GHz, with a total number of 204 and 139 new rotational transitions, respectively, for the trans and cis isomers. The final dataset also includes lines in the centimetre range available from the literature. Results: The extension of the measurements into the mm and submm range leads to an accurate set of rotational and centrifugal distortion parameters. This allows us to predict frequencies with estimated uncertainties as low as 5 kHz at 1 mm wavelength. Hence, the new dataset provided by this study can be used for astronomical searches. Frequency lists are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A56
ERIC Educational Resources Information Center
Webber, Nancy
2004-01-01
Many art teachers use the Web as an information source. Overall, they look for good content that is clearly written, concise, accurate, and pertinent. A well-designed site gives users what they want quickly, efficiently, and logically, and does not ask them to assemble a puzzle to resolve their search. How can websites with these qualities be…
Numerical Prediction of Pitch Damping Stability Derivatives for Finned Projectiles
2013-11-01
Supported in part by a grant of high-performance computing time from the U.S. DOD High Performance Computing Modernization Program (HPCMP) at the Army…
Eyewitness Identification Accuracy and Response Latency: The Unruly 10-12-Second Rule
ERIC Educational Resources Information Center
Weber, Nathan; Brewer, Neil; Wells, Gary L.; Semmler, Carolyn; Keast, Amber
2004-01-01
Data are reported from 3,213 research eyewitnesses confirming that accurate eyewitness identifications from lineups are made faster than are inaccurate identifications. However, consistent with predictions from the recognition and search literatures, the authors did not find support for the "10-12-s rule" in which lineup identifications faster…
A Fortran Program to Aid in Mineral Identification Using Optical Properties.
ERIC Educational Resources Information Center
Blanchard, Frank N.
1980-01-01
Describes a search-and-match computer program which retrieves from a user-generated mineral file those minerals that are not incompatible with the observed or measured optical properties of an unknown. Careful selection of input lists makes it unlikely that the program will fail when reasonably accurate observations are recorded. (Author/JN)
Systematic Review and Consensus Guidelines for Environmental Sampling of Burkholderia pseudomallei
Limmathurotsakul, Direk; Dance, David A. B.; Wuthiekanun, Vanaporn; Kaestli, Mirjam; Mayo, Mark; Warner, Jeffrey; Wagner, David M.; Tuanyok, Apichai; Wertheim, Heiman; Yoke Cheng, Tan; Mukhopadhyay, Chiranjay; Puthucheary, Savithiri; Day, Nicholas P. J.; Steinmetz, Ivo; Currie, Bart J.; Peacock, Sharon J.
2013-01-01
Background Burkholderia pseudomallei, a Tier 1 Select Agent and the cause of melioidosis, is a Gram-negative bacillus present in the environment in many tropical countries. Defining the global pattern of B. pseudomallei distribution underpins efforts to prevent infection, and is dependent upon robust environmental sampling methodology. Our objective was to review the literature on the detection of environmental B. pseudomallei, update the risk map for melioidosis, and propose international consensus guidelines for soil sampling. Methods/Principal Findings An international working party (Detection of Environmental Burkholderia pseudomallei Working Party (DEBWorP)) was formed during the VIth World Melioidosis Congress in 2010. PubMed (January 1912 to December 2011) was searched using the following MeSH terms: pseudomallei or melioidosis. Bibliographies were hand-searched for secondary references. The reported geographical distribution of B. pseudomallei in the environment was mapped and categorized as definite, probable, or possible. The methodology used for detecting environmental B. pseudomallei was extracted and collated. We found that global coverage was patchy, with a lack of studies in many areas where melioidosis is suspected to occur. The sampling strategies and bacterial identification methods used were highly variable, and not all were robust. We developed consensus guidelines with the goals of reducing the probability of false-negative results, and the provision of affordable and ‘low-tech’ methodology that is applicable in both developed and developing countries. Conclusions/Significance The proposed consensus guidelines provide the basis for the development of an accurate and comprehensive global map of environmental B. pseudomallei. PMID:23556010
Meng, Xianshuang; Bai, Hua; Guo, Teng; Niu, Zengyuan; Ma, Qiang
2017-12-15
Comprehensive identification and quantitation of 100 multi-class regulated ingredients in cosmetics was achieved using ultra-high-performance liquid chromatography (UHPLC) coupled with hybrid quadrupole-Orbitrap high-resolution mass spectrometry (Q-Orbitrap HRMS). A simple, efficient, and inexpensive sample pretreatment protocol was developed using ultrasound-assisted extraction (UAE), followed by dispersive solid-phase extraction (dSPE). The cosmetic samples were analyzed by UHPLC-Q-Orbitrap HRMS under synchronous full-scan MS and data-dependent MS/MS (full-scan MS1/dd-MS2) acquisition mode. The mass resolution was set to 70,000 FWHM (full width at half maximum) for the full-scan MS1 stage and 17,500 FWHM for the dd-MS2 stage, with experimentally measured mass deviations of less than 2 ppm (parts per million) for quasi-molecular ions and 5 ppm for characteristic fragment ions for each individual analyte. An accurate-mass database and a mass spectral library were built in house for searching the 100 target compounds. Broad screening was conducted by comparing the experimentally measured exact mass of precursor and fragment ions, retention time, isotopic pattern, and ionic ratio with the accurate-mass database and by matching the acquired MS/MS spectra against the mass spectral library. The developed methodology was evaluated and validated in terms of limits of detection (LODs), limits of quantitation (LOQs), linearity, stability, accuracy, and matrix effect. The UHPLC-Q-Orbitrap HRMS approach was applied for the analysis of the 100 target illicit ingredients in 123 genuine cosmetic samples, and exhibited great potential for high-throughput, sensitive, and reliable screening of multi-class illicit compounds in cosmetics. Copyright © 2017 Elsevier B.V. All rights reserved.
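Mass-deviation screening of this kind reduces to a parts-per-million comparison between measured and theoretical m/z, with the abstract's tolerances of 2 ppm for precursor ions and 5 ppm for fragments. A small sketch of that check; the m/z values in the test are arbitrary illustrations, not from the paper's database:

```python
def ppm_error(measured, theoretical):
    """Signed mass deviation in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def within_tolerance(measured, theoretical, tol_ppm):
    """True when the measured m/z matches the theoretical value within tol_ppm."""
    return abs(ppm_error(measured, theoretical)) <= tol_ppm
```

High resolving power matters because at 5 ppm a tolerance window around m/z 300 is only ±0.0015 Da wide.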
Helicopter magnetic survey conducted to locate wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veloski, G.A.; Hammack, R.W.; Stamp, V.
2008-07-01
A helicopter magnetic survey was conducted in August 2007 over 15.6 sq mi at the Naval Petroleum Reserve No. 3's (NPR-3) Teapot Dome Field near Casper, Wyoming. The survey's purpose was to accurately locate wells drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood for EOR, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells missing from the well database and to provide accurate locations for all wells. The ability of the helicopter magnetic survey to accurately locate wells was assessed by comparing airborne well picks with well locations from an intense ground search of a small test area.
Solid waste forecasting using modified ANFIS modeling.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; K N A, Maulud
2015-10-01
Solid waste prediction is crucial for sustainable solid waste management. Accurate waste generation records are usually a challenge to obtain in developing countries, which complicates the modelling process. Solid waste generation is related to demographic, economic, and social factors; however, these factors vary widely with population and economic growth. The objective of this research is to determine the demographic and economic factors that most influence solid waste generation using a systematic approach, and then to develop a model to forecast solid waste generation using a modified Adaptive Neuro-Fuzzy Inference System (MANFIS). The model evaluation was performed using the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R²). The results show that the best input variables are the population age groups 0-14, 15-64, and above 65 years, and that the best model structure is 3 triangular fuzzy membership functions and 27 fuzzy rules. The model has been validated using testing data; the resulting training RMSE, MAE, and R² were 0.2678, 0.045, and 0.99, respectively, while for the testing phase RMSE = 3.986, MAE = 0.673, and R² = 0.98. To date, few attempts have been made to predict annual solid waste generation in developing countries. This paper presents modeling of annual solid waste generation using a modified ANFIS: a systematic approach to search for the most influential factors and then modify the ANFIS structure to simplify the model. The proposed method can be used to forecast waste generation in developing countries where accurate, reliable data are not always available. Moreover, annual solid waste prediction is essential for sustainable planning.
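The evaluation metrics quoted (RMSE, MAE, R²) and the triangular memberships in the chosen model structure all have simple closed forms. A small sketch of both, independent of the MANFIS model itself:

```python
def rmse(y, yhat):
    """Root mean square error between observations y and predictions yhat."""
    return (sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)) ** 0.5

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def trimf(x, a, b, c):
    """Triangular fuzzy membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```

In an ANFIS, each input would be covered by several such triangles (here, 3 per input) whose firing strengths combine through the fuzzy rules.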
orthoFind Facilitates the Discovery of Homologous and Orthologous Proteins.
Mier, Pablo; Andrade-Navarro, Miguel A; Pérez-Pulido, Antonio J
2015-01-01
Finding homologous and orthologous protein sequences is often the first step in evolutionary studies, annotation projects, and experiments of functional complementation. Despite all currently available computational tools, there is a requirement for easy-to-use tools that provide functional information. Here, a new web application called orthoFind is presented, which allows a quick search for homologous and orthologous proteins given one or more query sequences, allowing a recurrent and exhaustive search against reference proteomes, and being able to include user databases. It addresses the protein multidomain problem, searching for homologs with the same domain architecture, and gives a simple functional analysis of the results to help in the annotation process. orthoFind is easy to use and has been proven to provide accurate results with different datasets. Availability: http://www.bioinfocabd.upo.es/orthofind/.
Heat stroke internet searches can be a new heatwave health warning surveillance indicator
Li, Tiantian; Ding, Fan; Sun, Qinghua; Zhang, Yi; Kinney, Patrick L.
2016-01-01
The impact of major heatwave shocks on population morbidity and mortality has become an urgent public health concern. However, current heatwave warning systems suffer from a lack of validation and an inability to provide accurate health risk warnings in a timely way. Here we conducted a correlation and linear regression analysis to test the relationship between heat stroke internet searches and heat stroke health outcomes in Shanghai, China, during the summer of 2013. We show that the resulting heat stroke index captures much of the variation in heat stroke cases and deaths. The correlation between heat stroke deaths, the search index and the incidence of heat stroke is higher than the correlation with maximum temperature. This study highlights a fast and effective heatwave health warning indicator with potential to be used throughout the world. PMID:27869135
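The analysis described is a Pearson correlation plus an ordinary least squares fit between the search index and the health outcomes. A self-contained sketch of both computations; the numbers in the test are made-up illustrations, not the Shanghai data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ols_fit(x, y):
    """Least-squares slope and intercept for y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx
```

Comparing r for the search index against r for maximum temperature, as the study does, then ranks the two candidate warning indicators.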
Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles.
Port, Nicholas L; Trimberger, Jane; Hitzeman, Steve; Redick, Bryan; Beckerman, Stephen
2016-01-01
Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 years and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular saccade frequency exhibited a very small increase across the lifespan. Visual search ability, as with many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. Copyright © 2015 Elsevier Ltd. All rights reserved.
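Detectors in the Engbert-Kliegl/Engbert-Mergenthaler family classify a sample as saccadic when its 2-D eye velocity exceeds an elliptical threshold set at λ median-based standard deviations of the velocity noise. A simplified single-trial sketch of that idea, not the authors' modified detector:

```python
def detect_saccades(xs, ys, dt=0.002, lam=6.0, min_len=3):
    """Velocity-threshold (micro)saccade detection on one gaze trace."""
    # 5-sample central-difference velocity estimate
    vx = [(xs[i + 2] + xs[i + 1] - xs[i - 1] - xs[i - 2]) / (6 * dt)
          for i in range(2, len(xs) - 2)]
    vy = [(ys[i + 2] + ys[i + 1] - ys[i - 1] - ys[i - 2]) / (6 * dt)
          for i in range(2, len(ys) - 2)]

    def msd(v):
        # Median-based standard deviation of the velocity samples
        med = sorted(v)[len(v) // 2]
        return (sorted((u - med) ** 2 for u in v)[len(v) // 2]) ** 0.5 or 1e-9

    tx, ty = lam * msd(vx), lam * msd(vy)
    above = [(a / tx) ** 2 + (b / ty) ** 2 > 1 for a, b in zip(vx, vy)]
    # Group consecutive above-threshold samples into candidate saccades
    events, start = [], None
    for i, flag in enumerate(above + [False]):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start + 2, i + 2))  # shift back to sample indices
            start = None
    return events
```

Because the threshold scales with each trace's own noise level, the same λ works across observers, which matters when pooling ages 4 to 66.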
Choe, Sanggil; Kim, Suncheun; Choi, Hyeyoung; Choi, Hwakyoung; Chung, Heesun; Hwang, Bangyeon
2010-06-15
Agilent GC-MS MSD ChemStation offers an automated library search report for toxicological screening using the total ion chromatogram (TIC) and mass spectra in normal mode. Numerous peaks appear in the chromatogram of a biological specimen such as blood or urine; large migrating peaks often obscure small target peaks, and target peaks of low abundance regularly give wrong library search results or low matching scores. As a result, the retention time and mass spectrum of every peak in the chromatogram have to be checked for relevance. These repeated actions are tedious and time-consuming for toxicologists. MSD ChemStation operates using a number of macro files that give commands and instructions on how to work on and extract data from the chromatogram and spectra. These macro files are built with the software's own macro compiler; all of the original macro files can be modified, and new macro files can be added by users. To get more accurate results with a more convenient method, and to save time in data analysis, we developed new macro files for report generation and inserted new menus in the Enhanced Data Analysis program. Toxicological screening reports generated by these new macro files are in text or graphic mode, and can be generated with three different automated subtraction options. Text reports have a Brief and a Full mode, and graphic reports have an option with or without the mass spectrum. The matched mass spectrum and matching score for detected compounds are printed in the reports by the modified library search modules. We have also developed an independent application program named DrugMan, which manages the drug groups, lists, and parameters in use in MSD ChemStation. The incorporation of DrugMan with the modified macro modules provides a powerful tool for toxicological screening and saves a great deal of valuable time in toxicological work. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-01-01
Background Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. Aims The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Materials and methods Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. Results The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. Conclusion We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI. PMID:24683515
NASA Astrophysics Data System (ADS)
Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming
2016-12-01
Reconstructing the distribution of optical parameters in a participating medium from the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of the FD-RTE is solved via the finite volume method (FVM). A regularization term based on the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and to improve convergence efficiency. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on the FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient contain fewer errors than those of the absorption coefficient, which indicates the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).
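The multi-start strategy can be illustrated on a toy multimodal objective (a stand-in only; the paper's objective requires the FD-RTE forward solve, and its gradient comes from the adjoint technique described above). Each start runs a Fletcher-Reeves nonlinear conjugate gradient descent, and the best local minimum is kept:

```python
def f(p):
    # Toy objective with two basins: global minimum near x = -1.04, a
    # shallower local minimum near x = +0.96.
    x, y = p
    return (x * x - 1.0) ** 2 + 0.3 * x + y * y

def grad_f(p):
    x, y = p
    return [4.0 * x * (x * x - 1.0) + 0.3, 2.0 * y]

def cg_minimize(f, grad, x0, max_iter=200, tol=1e-8):
    """Fletcher-Reeves nonlinear CG with Armijo backtracking line search."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        gd = sum(gi * di for gi, di in zip(g, d))
        if gd > 0.0:                       # not a descent direction: restart
            d = [-gi for gi in g]
            gd = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd and t > 1e-12:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < tol ** 2:
            break
        beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x, f(x)

# Multi-start: run CG from several starting points and keep the best minimum,
# so a single unlucky start does not trap the search in a local basin.
starts = [(-1.1, 0.0), (1.5, -0.5), (0.3, 1.0)]
best_x, best_f = min((cg_minimize(f, grad_f, s) for s in starts), key=lambda r: r[1])
```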
A Novel Residual Frequency Estimation Method for GNSS Receivers.
Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai
2018-01-04
In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or they are used within the frequency estimator to improve its accuracy in open-loop architectures. Current estimation methods have several disadvantages, including sensitivity to noise and a wide search space. This paper proposes a new residual frequency estimation method based on differential processing. Although the complexity of the proposed method is higher than that of traditional methods, it can produce more accurate estimates without increasing the size of the search space.
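The general differential-processing idea can be sketched as follows (a minimal illustration of the technique's principle, not the paper's exact estimator): the phase of the product p[k]·conj(p[k−1]) of consecutive prompt correlator outputs is proportional to the residual carrier frequency, so averaging these products gives a noise-robust estimate.

```python
import cmath
import math

def residual_freq_estimate(prompt, T):
    """Estimate the residual carrier frequency (Hz) from consecutive complex
    prompt correlator outputs by averaging the differential products
    p[k] * conj(p[k-1]) and taking the phase of the sum.

    T is the coherent integration time per output (s); the estimate is
    unambiguous only for |f| < 1/(2*T).
    """
    acc = sum(b * a.conjugate() for a, b in zip(prompt[:-1], prompt[1:]))
    return cmath.phase(acc) / (2.0 * math.pi * T)

# Synthetic check: noiseless correlator outputs rotating at a 25 Hz residual.
T = 0.001                                  # 1 ms coherent integration
prompt = [cmath.exp(2j * math.pi * 25.0 * k * T) for k in range(100)]
f_hat = residual_freq_estimate(prompt, T)
```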
Load forecast method of electric vehicle charging station using SVR based on GA-PSO
NASA Astrophysics Data System (ADS)
Lu, Kuan; Sun, Wenxue; Ma, Changhui; Yang, Shenquan; Zhu, Zijian; Zhao, Pengfei; Zhao, Xin; Xu, Nan
2017-06-01
This paper presents a Support Vector Regression (SVR) method for electric vehicle (EV) charging station load forecasting based on a genetic algorithm (GA) and particle swarm optimization (PSO). Fuzzy C-Means (FCM) clustering is used to establish similar-day samples. The GA is used for global parameter searching and PSO for more accurate local searching. The load forecast is then regressed using SVR. Practical load data from an EV charging station are used to illustrate the proposed method. The results indicate an obvious improvement in forecasting accuracy compared with SVRs based on PSO or GA alone.
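The GA-for-global, PSO-for-local hand-off can be sketched on a stand-in objective (assume it represents the SVR cross-validation error as a function of two hyperparameters; the paper's actual pipeline also includes FCM similar-day selection and SVR training, omitted here):

```python
import random

random.seed(0)

def objective(p):
    # Stand-in for the SVR validation-error surface over two hyperparameters;
    # the minimum is placed arbitrarily at (3, -1) for this sketch.
    a, b = p
    return (a - 3.0) ** 2 + (b + 1.0) ** 2

BOUNDS = [(-10.0, 10.0), (-10.0, 10.0)]

def ga_search(pop_size=30, gens=40):
    """Coarse global search with a simple real-coded GA (elitism + blend
    crossover of top candidates + Gaussian mutation)."""
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        new = pop[:2]                                   # elitism
        while len(new) < pop_size:
            a, b = random.sample(pop[:10], 2)           # parents from the top
            child = [(x + y) / 2.0 + random.gauss(0.0, 0.5) for x, y in zip(a, b)]
            new.append([min(max(v, lo), hi) for v, (lo, hi) in zip(child, BOUNDS)])
        pop = new
    return min(pop, key=objective)

def pso_refine(center, n=20, iters=100, radius=0.5):
    """Local refinement with PSO initialized around the GA solution."""
    pos = [[c + random.uniform(-radius, radius) for c in center] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=objective)
    return gbest

coarse = ga_search()        # GA: global exploration
best = pso_refine(coarse)   # PSO: accurate local search around the GA result
```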
Laufer, Shlomi; D'Angelo, Anne-Lise D; Kwan, Calvin; Ray, Rebbeca D; Yudkowsky, Rachel; Boulet, John R; McGaghie, William C; Pugh, Carla M
2017-12-01
Develop new performance evaluation standards for the clinical breast examination (CBE). There are several technical aspects of a proper CBE. Our recent work discovered a significant, linear relationship between palpation force and CBE accuracy. This article investigates the relationship between other technical aspects of the CBE and accuracy. This performance assessment study involved data collection from physicians (n = 553) attending 3 different clinical meetings between 2013 and 2014: American Society of Breast Surgeons, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists. Four previously validated, sensor-enabled breast models were used for clinical skills assessment. Models A and B had solitary, superficial, 2 cm and 1 cm soft masses, respectively. Models C and D had solitary, deep, 2 cm hard and moderately firm masses, respectively. Finger movements (search technique) from 1137 CBE video recordings were independently classified by 2 observers. Final classifications were compared with CBE accuracy. Accuracy rates were model A = 99.6%, model B = 89.7%, model C = 75%, and model D = 60%. Final classification categories for search technique included rubbing movement, vertical movement, piano fingers, and other. Interrater reliability was substantial (k = 0.79). Rubbing movement was 4 times more likely to yield an accurate assessment (odds ratio 3.81, P < 0.001) than vertical movement or piano fingers. Piano fingers had the highest failure rate (36.5%). Regression analysis of search pattern, search technique, palpation force, examination time, and 6 demographic variables revealed that search technique independently and significantly affected CBE accuracy (P < 0.001). Our results support measurement and classification of CBE techniques and provide the foundation for a new paradigm in teaching and assessing hands-on clinical skills.
The newly described piano fingers palpation technique was noted to have unusually high failure rates. Medical educators should be aware of the potential differences in effectiveness for various CBE techniques.
Asking better questions: How presentation formats influence information search.
Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D
2017-08-01
Although the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, in which subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
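The normative benchmark in this task (which the TTD heuristic approximates without computing posteriors) is the expected classification accuracy of each query under Bayes' rule: after each outcome the classifier picks the more probable class, so a query's expected accuracy is the sum over outcomes of the larger joint probability. A minimal sketch, with illustrative numbers of my own choosing:

```python
def expected_accuracy(prior, sens, spec):
    """Expected classification accuracy after one binary query.

    prior: P(class A); sens = P(positive | A); spec = P(negative | B).
    After each outcome the observer classifies as the more probable class,
    so accuracy = sum over outcomes of max_c P(class c, outcome).
    """
    p_a, p_b = prior, 1.0 - prior
    joint_pos = (p_a * sens, p_b * (1.0 - spec))   # P(A, +), P(B, +)
    joint_neg = (p_a * (1.0 - sens), p_b * spec)   # P(A, -), P(B, -)
    return max(joint_pos) + max(joint_neg)

# Which of two candidate queries is more useful in a 70/30 environment?
q1 = expected_accuracy(0.7, sens=0.9, spec=0.5)    # 0.78
q2 = expected_accuracy(0.7, sens=0.6, spec=0.8)    # 0.70
```

Here q1 is the accuracy-maximizing query, even though q2 has the more balanced sensitivity/specificity profile.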
BLAST and FASTA similarity searching for multiple sequence alignment.
Pearson, William R
2014-01-01
BLAST, FASTA, and other similarity searching programs seek to identify homologous proteins and DNA sequences based on excess sequence similarity. If two sequences share much more similarity than expected by chance, the simplest explanation for the excess similarity is common ancestry-homology. The most effective similarity searches compare protein sequences, rather than DNA sequences, for sequences that encode proteins, and use expectation values, rather than percent identity, to infer homology. The BLAST and FASTA packages of sequence comparison programs provide programs for comparing protein and DNA sequences to protein databases (the most sensitive searches). Protein and translated-DNA comparisons to protein databases routinely allow evolutionary look-back times from 1 to 2 billion years; DNA:DNA searches are 5-10-fold less sensitive. BLAST and FASTA can be run on popular web sites, but can also be downloaded and installed on local computers. With local installation, target databases can be customized for the sequence data being characterized. With today's very large protein databases, search sensitivity can also be improved by searching smaller comprehensive databases, for example, a complete protein set from an evolutionarily neighboring model organism. By default, BLAST and FASTA use scoring strategies targeted at distant evolutionary relationships; for comparisons involving short domains or queries, or searches that seek relatively close homologs (e.g. mouse-human), shallower scoring matrices will be more effective. Both BLAST and FASTA provide very accurate statistical estimates, which can be used to reliably identify protein sequences that diverged more than 2 billion years ago.
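The expectation values recommended above follow Karlin-Altschul statistics; in bit-score form the relationship is E = m · n · 2^(−S'), where m and n are the effective query and database lengths and S' is the bit score. A minimal sketch (real BLAST additionally applies edge-effect corrections to m and n, omitted here):

```python
def evalue(bit_score, query_len, db_len):
    """Karlin-Altschul expectation value in bit-score form:
    E = m * n * 2**(-S').

    query_len (m) and db_len (n) are the effective search-space lengths;
    production BLAST applies edge-effect corrections to both, which this
    sketch omits.
    """
    return query_len * db_len * 2.0 ** (-bit_score)

# A 50-bit hit from a 300-residue query against a 1e8-residue database:
e = evalue(50.0, 300, 10 ** 8)     # about 2.7e-5, i.e. a significant hit
```

This is why a fixed bit score becomes less significant as databases grow: E scales linearly with n, which motivates the abstract's advice to search smaller comprehensive databases.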
Visuospatial working memory mediates inhibitory and facilitatory guidance in preview search.
Barrett, Doug J K; Shimozaki, Steven S; Jensen, Silke; Zobay, Oliver
2016-10-01
Visual search is faster and more accurate when a subset of distractors is presented before the display containing the target. This "preview benefit" has been attributed to separate inhibitory and facilitatory guidance mechanisms during search. In the preview task the temporal cues thought to elicit inhibition and facilitation provide complementary sources of information about the likely location of the target. In this study, we use a Bayesian observer model to compare sensitivity when the temporal cues eliciting inhibition and facilitation produce complementary, and competing, sources of information. Observers searched for T-shaped targets among L-shaped distractors in 2 standard and 2 preview conditions. In the standard conditions, all the objects in the display appeared at the same time. In the preview conditions, the initial subset of distractors either stayed on the screen or disappeared before the onset of the search display, which contained the target when present. In the latter, the synchronous onset of old and new objects negates the predictive utility of stimulus-driven capture during search. The results indicate observers combine memory-driven inhibition and sensory-driven capture to reduce spatial uncertainty about the target's likely location during search. In the absence of spatially predictive onsets, memory-driven inhibition at old locations persists despite irrelevant sensory change at previewed locations. This result is consistent with a bias toward unattended objects during search via the active suppression of irrelevant capture at previously attended locations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Geant4 Simulations for the Radon Electric Dipole Moment Search at TRIUMF
NASA Astrophysics Data System (ADS)
Rand, Evan; Bangay, Jack; Bianco, Laura; Dunlop, Ryan; Finlay, Paul; Garrett, Paul; Leach, Kyle; Phillips, Andrew; Svensson, Carl; Sumithrarachchi, Chandana; Wong, James
2010-11-01
The existence of a permanent electric dipole moment (EDM) requires the violation of time-reversal symmetry (T) or, equivalently, the violation of charge conjugation C and parity P (CP). Although no particle EDM has yet been found, current theories beyond the Standard Model, e.g. multiple-Higgs theories, left-right symmetry, and supersymmetry, predict EDMs within current experimental reach. In fact, present limits on the EDMs of the neutron, electron and ^199Hg atom have significantly reduced the parameter spaces of these models. The measurement of a non-zero EDM would be a direct measurement of the violation of time-reversal symmetry, and would represent a clear signal of new physics beyond the Standard Model. Recent theoretical calculations predict large enhancements in the atomic EDMs for atoms with octupole-deformed nuclei, making odd-A Rn isotopes prime candidates for the EDM search. The Geant4 simulations presented here are essential for the development towards an EDM measurement. They provide an accurate description of γ-ray scattering and backgrounds in the experimental apparatus, and are being used to study the overall sensitivity of the RnEDM experiment at TRIUMF in Vancouver, B.C.
Paes, Thaís; Machado, Felipe Vilaça Cavallari; Cavalheri, Vinícius; Pitta, Fabio; Hernandes, Nidia Aparecida
2017-07-01
People with chronic obstructive pulmonary disease (COPD) present symptoms such as dyspnea and fatigue, which hinder their performance in activities of daily living (ADL). A few multitask protocols have been developed to assess ADL performance in this population, although the measurement properties of such protocols had not yet been systematically reviewed. Areas covered: Studies were included if an assessment of the ability to perform ADL was conducted in people with COPD using an objective, performance-based protocol. The search was conducted in the following databases: Pubmed, EMBASE, Cochrane Library, PEDro, CINAHL and LILACS. Furthermore, hand searches were conducted. Expert commentary: To date, only three protocols have had their measurement properties described: the Glittre ADL Test, the Monitored Functional Task Evaluation and the Londrina ADL Protocol were shown to be valid and reliable, whereas only the Glittre ADL Test was shown to be responsive to change after pulmonary rehabilitation. These protocols can be used in laboratory settings and clinical practice to evaluate ADL performance in people with COPD, although there is a need for more in-depth information on their validity, reliability and especially responsiveness, given the growing interest in the accurate assessment of ADL performance in this population.
FRIPON, the French fireball network
NASA Astrophysics Data System (ADS)
Colas, F.; Zanda, B.; Bouley, S.; Vaubaillon, J.; Marmo, C.; Audureau, Y.; Kwon, M. K.; Rault, J. L.; Caminade, S.; Vernazza, P.; Gattacceca, J.; Birlan, M.; Maquet, L.; Egal, A.; Rotaru, M.; Gruson-Daniel, Y.; Birnbaum, C.; Cochard, F.; Thizy, O.
2015-10-01
FRIPON (Fireball Recovery and InterPlanetary Observation Network) [4] (Colas et al., 2014) was recently funded by the ANR (Agence Nationale de la Recherche). Its aim is to connect meteoritical science with asteroidal and cometary science in order to better understand solar system formation and evolution. The main idea is to set up an observation network covering the whole French territory to collect a large number of meteorites (one or two per year) with accurate orbits, allowing us to pinpoint possible parent bodies. 100 all-sky cameras will be installed by the end of 2015, forming a dense network with an average distance of 100 km between stations. To maximize the accuracy of orbit determination, we will combine our optical data with radar data from the GRAVES beacon received by 25 stations [5] (Rault et al., 2015). As both the setting up of the network and the creation of search teams for meteorites will need manpower beyond our small team of professionals, we are developing a citizen science network called Vigie-Ciel [6] (Zanda et al., 2015). The public at large will thus be able to simply use our data, participate in search campaigns, or even set up their own cameras.
Talent identification and specialization in sport: an overview of some unanswered questions.
Gonçalves C, E B; Rama L, M L; Figueiredo, António B
2012-12-01
The theory of deliberate practice postulates that experts are always made, not born. This theory translated to the youth-sport domain means that if athletes want to be high-level performers, they need to deliberately engage in practice during the specialization years, spending time wisely and always focusing on tasks that challenge current performance. Sport organizations in several countries around the world created specialized training centers where selected young talents practice under the supervision of experienced coaches in order to become professional athletes and integrate onto youth national teams. Early specialization and accurate observation by expert coaches or scouts remain the only tools to find a potential excellent athlete among a great number of participants. In the current study, the authors present 2 of the problems raised by talent search and the risks of such a search. Growth and maturation are important concepts to better understand the identification, selection, and development processes of young athletes. However, the literature suggests that sport-promoting strategies are being maintained despite the increased demands in the anthropometric characteristics of professional players and demands of actual professional soccer competitions. On the other hand, identifying biological variables that can predict performance is almost impossible.
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of the heterogeneous agent architecture for WSNs proposed in this paper. The proposed agent architecture views a WSN as a multi-agent system, and mobile agents are employed to reduce in-network communication. Based on this architecture, an energy-based acoustic localization algorithm is proposed. In localization, the estimate of the target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by a distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
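The energy-based localization step can be sketched as a least-squares fit under the usual inverse-square propagation model (a simplified assumption: the paper's algorithm also adapts its termination condition, and the source power is here taken as known rather than estimated jointly):

```python
def localize(sensors, energies, source_power, iters=500):
    """Least-squares acoustic source localization from energy readings.

    Assumes the inverse-square model E_i = P / ||x - s_i||**2 with known
    source power P. Minimizes sum_i (E_i - P/d_i**2)**2 by steepest descent
    with backtracking, starting from the energy-weighted centroid.
    """
    w = sum(energies)
    x = [sum(e * s[d] for e, s in zip(energies, sensors)) / w for d in range(2)]

    def loss(x):
        return sum((e - source_power / ((x[0] - s[0]) ** 2 + (x[1] - s[1]) ** 2)) ** 2
                   for e, s in zip(energies, sensors))

    def gradient(x):
        g = [0.0, 0.0]
        for e, s in zip(energies, sensors):
            d2 = (x[0] - s[0]) ** 2 + (x[1] - s[1]) ** 2
            r = e - source_power / d2
            for d in range(2):
                g[d] += 4.0 * r * source_power * (x[d] - s[d]) / d2 ** 2
        return g

    for _ in range(iters):
        g = gradient(x)
        t, fx = 1.0, loss(x)
        # Backtracking: shrink the step until the loss actually decreases.
        while loss([x[0] - t * g[0], x[1] - t * g[1]]) >= fx and t > 1e-15:
            t *= 0.5
        x = [x[0] - t * g[0], x[1] - t * g[1]]
    return x

# Four sensors at the corners of a 10 m square, source at (3, 4), P = 100.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 4.0)
energies = [100.0 / ((truth[0] - s[0]) ** 2 + (truth[1] - s[1]) ** 2) for s in sensors]
est = localize(sensors, energies, source_power=100.0)
```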
Kwon, Yoojin; Powelson, Susan E; Wong, Holly; Ghali, William A; Conly, John M
2014-11-11
The purpose of our study was to determine the value and efficacy of searching biomedical databases beyond MEDLINE for systematic reviews. We analyzed the results from a systematic review conducted by the authors and others on ward closure as an infection control practice. Ovid MEDLINE including In-Process & Other Non-Indexed Citations, Ovid Embase, CINAHL Plus, LILACS, and IndMED were systematically searched for articles of any study type discussing ward closure, as were bibliographies of selected articles and recent infection control conference abstracts. Search results were tracked, recorded, and analyzed using a relative recall method. The sensitivity of searching in each database was calculated. Two thousand ninety-five unique citations were identified and screened for inclusion in the systematic review: 2,060 from database searching and 35 from hand searching and other sources. Ninety-seven citations were included in the final review. The MEDLINE and Embase searches each retrieved 80 of the 97 included articles; only 4 articles from each database were unique. The CINAHL search retrieved 35 included articles, of which 4 were unique. The IndMED and LILACS searches did not retrieve any included articles, although 75 of the included articles were indexed in LILACS. The true value of using regional databases, particularly LILACS, may lie in the ability to search in the language spoken in the region. Eight articles were found only through hand searching. Identifying studies for a systematic review where the research is observational is complex. The value each individual study contributes to the review cannot be accurately measured. Consequently, we could not determine with accuracy the value of results found from searching beyond MEDLINE, Embase, and CINAHL. However, hand searching for serendipitous retrieval remains an important aspect due to indexing and keyword challenges inherent in this literature.
Adaptation of video game UVW mapping to 3D visualization of gene expression patterns
NASA Astrophysics Data System (ADS)
Vize, Peter D.; Gerth, Victor E.
2007-01-01
Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development, the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size, so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline-based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, both Java- and OpenGL-based, optimized for viewing 3D gene expression data will also be demonstrated.
History and Trends of "Personal Health Record" Research in PubMed
Kim, Jeongeun; Bates, David W.
2011-01-01
Objectives The purpose of this study was to review the history and trends of personal health record (PHR) research in PubMed and to provide an accurate understanding and categorical analysis of expert opinions. Methods For the search strategy, PubMed was queried for 'personal health record, personal record, and PHR' in the title and abstract fields. Articles using different definitions of the term were removed by one-by-one analysis of the 695 results. In the end, a total of 229 articles were analyzed in this research. Results The results show changes in terminology over the years, a shift toward patient-centeredness, and mixed usage of terms. We traced the history and trends of PHR research by category: the number of publications by year, topic, methodology, and target disease. Analysis of MeSH terms also revealed the focal interests regarding PHR boundaries and related subjects. Conclusions For PHRs to be used efficiently by the general public, an initial understanding of the history and trends of PHR research may be helpful. At the same time, an accurate understanding and categorical analysis of the expert opinions that can lead to the development and growth of PHRs will be valuable to their adoption and expansion. PMID:21818452
Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.
Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M
2013-04-02
A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
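The SHAPE information enters the energy model as a per-nucleotide pseudo-free-energy restraint of the form ΔG_SHAPE = m·ln(reactivity + 1) + b. A minimal sketch, using the commonly cited default slope and intercept (2.6 and −0.8 kcal/mol; assumed here, as individual studies tune these values):

```python
import math

def shape_pseudo_energy(reactivity, slope=2.6, intercept=-0.8):
    """Pseudo-free-energy change (kcal/mol) added to a nucleotide's pairing
    terms in SHAPE-directed folding: dG_SHAPE = m * ln(reactivity + 1) + b.

    High reactivity (flexible, likely unpaired) yields a positive penalty
    against pairing; slope/intercept defaults of 2.6 and -0.8 kcal/mol are
    commonly cited values. Below-background (negative) reactivities are
    clamped to 0 in this sketch.
    """
    return slope * math.log(max(reactivity, 0.0) + 1.0) + intercept

# A toy reactivity profile: low values favor pairing, the 1.45 peak is
# penalized against pairing.
profile = [0.02, 0.11, 1.45, 0.8, -0.05]
penalties = [shape_pseudo_energy(r) for r in profile]
```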
Swider, Brian W; Zimmerman, Ryan D; Barrick, Murray R
2015-05-01
Numerous studies link applicant fit perceptions measured at a single point in time to recruitment outcomes. Expanding upon this prior research by incorporating decision-making theory, this study examines how applicants develop these fit perceptions over the duration of the recruitment process, showing meaningful changes in fit perceptions across and within organizations over time. To assess the development of applicant fit perceptions, eight assessments of person-organization (PO) fit with up to four different organizations across 169 applicants for 403 job choice decisions were analyzed. Results showed the presence of initial levels and changes in differentiation of applicant PO fit perceptions across organizations, which significantly predicted future job choice. In addition, changes in within-organization PO fit perceptions across two stages of recruitment predicted applicant job choices among multiple employers. The implications of these results for accurately understanding the development of fit perceptions, relationships between fit perceptions and key recruiting outcomes, and possible limitations of past meta-analytically derived estimates of these relationships are discussed. (c) 2015 APA, all rights reserved.
Microfluidic diagnostics for low-resource settings
NASA Astrophysics Data System (ADS)
Hawkins, Kenneth R.; Weigl, Bernhard H.
2010-02-01
Diagnostics for low-resource settings need above all to be inexpensive, but also accurate, reliable, rugged, and suited to the contexts of the developing world. Diagnostics for global health, based on minimally instrumented, microfluidics-based platforms employing low-cost disposables, has become a very active research area recently, thanks in part to new funding from the Bill & Melinda Gates Foundation, the National Institutes of Health, and other sources. This has led to a number of interesting prototype devices that are now in advanced development or clinical validation. These devices include disposables and instruments that perform multiplexed PCR-based assays for enteric, febrile, and vaginal diseases, as well as immunoassays for diseases such as malaria, HIV, and various sexually transmitted diseases. More recently, instrument-free diagnostic disposables based on isothermal nucleic-acid amplification have been developed. Regardless of platform, however, the search for truly low-cost manufacturing methods that would enable affordable systems (at volume, in the appropriate context) remains a significant challenge. Here we give an overview of existing platform development efforts, present some original research in this area at PATH, and reiterate a call to action for more.
Comparing NEO Search Telescopes
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2016-04-01
Multiple terrestrial and space-based telescopes have been proposed for detecting and tracking near-Earth objects (NEOs). Detailed simulations of the search performance of these systems have used complex computer codes that are not widely available, which hinders accurate cross-comparison of the proposals and obscures whether they have consistent assumptions. Moreover, some proposed instruments would survey infrared (IR) bands, whereas others would operate in the visible band, and differences among asteroid thermal and visible-light models used in the simulations further complicate like-to-like comparisons. I use simple physical principles to estimate basic performance metrics for the ground-based Large Synoptic Survey Telescope and three space-based instruments: Sentinel, NEOCam, and a CubeSat constellation. The performance is measured against two different NEO distributions, the Bottke et al. distribution of general NEOs and the Veres et al. distribution of Earth-impacting NEOs. The results of the comparison show simplified relative performance metrics, including the expected number of NEOs visible in the search volumes and the initial detection rates expected for each system. Although these simplified comparisons do not capture all of the details, they give considerable insight into the physical factors limiting performance. Multiple asteroid thermal models are considered, including FRM, NEATM, and a new generalized form of FRM. I describe issues with how IR albedo and emissivity have been estimated in previous studies, which may render them inaccurate. A thermal model for tumbling asteroids is also developed and suggests that tumbling asteroids may be surprisingly difficult for IR telescopes to observe.
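In the spirit of the abstract's simple-physical-principles approach, one back-of-envelope visible-band metric (my own illustration, not the author's model) is the heliocentric distance out to which an asteroid of absolute magnitude H is detectable at opposition, from V = H + 5·log10(r·Δ) with Δ = r − 1 AU and phase effects neglected:

```python
def max_detection_distance_au(h_mag, limiting_mag):
    """Heliocentric distance r (AU) at which an asteroid of absolute
    magnitude H is just detectable at opposition, neglecting phase effects:
    V = H + 5*log10(r * delta), with delta = r - 1 at opposition.
    Solves r*(r-1) = 10**((V_lim - H)/5) by bisection.
    """
    target = 10.0 ** ((limiting_mag - h_mag) / 5.0)   # required r*(r-1)
    lo, hi = 1.0 + 1e-9, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * (mid - 1.0) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A ~140 m class NEO (H = 22) against a 24.5 limiting magnitude:
r = max_detection_distance_au(h_mag=22.0, limiting_mag=24.5)   # about 2.35 AU
```

Even this crude bound shows how steeply the searchable volume grows with limiting magnitude, one of the physical factors the abstract identifies as limiting performance.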
Younger, Paula; Boddy, Kate
2009-06-01
The researchers involved in this study work at Exeter Health Library and at the Complementary Medicine Unit, Peninsula School of Medicine and Dentistry (PCMD). Within this collaborative environment it is possible to access the electronic resources of three institutions. This includes access to AMED and other databases using different interfaces. The aim of this study was to investigate whether searching different interfaces to the AMED allied health and complementary medicine database produced the same results when using identical search terms. The following Internet-based AMED interfaces were searched: DIALOG DataStar, EBSCOhost, and OVID SP_UI01.00.02. Search results from all three databases were saved in an EndNote database to facilitate analysis. A checklist was also compiled comparing interface features. In our initial search, DIALOG returned 29 hits, OVID 14, and EBSCOhost 8. If we assume that DIALOG returned 100% of potential hits, OVID initially returned only 48% of hits and EBSCOhost only 28%. In our search, a researcher using the EBSCOhost interface to carry out a simple search on AMED would miss over 70% of possible search hits. Subsequent EBSCOhost searches on different subjects failed to find between 21% and 86% of the hits retrieved using the same keywords via DIALOG DataStar. In two cases, the simple EBSCOhost search failed to find any of the results found via DIALOG DataStar. Depending on the interface, the number of hits retrieved from the same database with the same simple search can vary dramatically. Some simple searches fail to retrieve a substantial percentage of citations. This may result in an uninformed literature review, research funding application, or treatment intervention.
In addition to ensuring that keywords, spelling and medical subject headings (MeSH) accurately reflect the nature of the search, database users should include wildcards and truncation and adapt their search strategy substantially to retrieve the maximum number of appropriate citations possible. Librarians should be aware of these differences when making purchasing decisions, carrying out literature searches and planning user education.
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
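The opposition-based initialization step described above can be sketched as follows; the function name and the sphere error fitness are illustrative assumptions, not the authors' code:

```python
import random

def opposition_based_init(fitness, lower, upper, pop_size, dim, rng):
    """Opposition-based initialization: generate a random population,
    form its opposite population, and keep the fitter half as the
    a priori guess for the harmony memory (minimisation)."""
    pop = [[rng.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    # The opposite of x in [lower, upper] is lower + upper - x.
    opp = [[lower + upper - x for x in ind] for ind in pop]
    return sorted(pop + opp, key=fitness)[:pop_size]

rng = random.Random(42)
sphere = lambda ind: sum(x * x for x in ind)  # illustrative error fitness
harmony_memory = opposition_based_init(sphere, -5.0, 5.0,
                                       pop_size=10, dim=4, rng=rng)
```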
A Search for the HOCO Radical in the Massive Star-Forming Region Sgr B2(M)
NASA Astrophysics Data System (ADS)
Oyama, Takahiro; Araki, Mitsunori; Takano, Shuro; Kuze, Nobuhiko; Sumiyoshi, Yoshihiro; Tsukiyama, Koichi; Endo, Yasuki
2017-06-01
Despite its importance to the origin of life, long-standing efforts to detect the simplest amino acid, glycine (H_2NCH_2COOH), in the interstellar medium have not been successful. As a preliminary step toward a search for glycine, detection of its precursors has received attention. Glycine is thought to be produced by the reaction of the HOCO radical with the aminomethyl radical (CH_2NH_2) on interstellar grain surfaces: HOCO + CH_2NH_2 → H_2NCH_2COOH. (1) HOCO is produced by the reaction OH + CO → HOCO and/or by HCOOH → HOCO + H. However, neither HOCO nor CH_2NH_2 has been investigated in the interstellar medium. Recently, we determined accurate molecular constants of HOCO, from which accurate rest frequencies were derived. In the present study, we carried out observations of HOCO in the massive star-forming region Sgr B2(M), which hosts a variety of interstellar molecules, with the Nobeyama 45 m radio telescope. Although HOCO could not be detected in Sgr B2(M), an upper limit of 9.0× 10^{12} cm^{-2} was derived for its column density from the spectrum in the 88 GHz region by the rotational diagram method. If reaction (1) is a main process of glycine production in this region, an extremely deep search will be needed to detect glycine. T. Oyama et al., J. Chem. Phys. 134, 174303 (2011).
Accurate calculation of the geometric measure of entanglement for multipartite quantum states
NASA Astrophysics Data System (ADS)
Teng, Peiyuan
2017-07-01
This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
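For the bipartite special case, the link between the geometric measure and tensor (rank-1) decomposition reduces to a singular value decomposition, which allows a compact sketch (a minimal illustration, not the paper's general multipartite algorithm):

```python
import numpy as np

def geometric_entanglement_bipartite(psi, d_a, d_b):
    """E_G = 1 - max |<phi|psi>|^2 over product states |phi>.
    For a bipartite pure state, the maximal overlap is the largest
    singular value of the amplitude matrix, i.e. its best rank-1
    (tensor) approximation."""
    m = np.asarray(psi, dtype=complex).reshape(d_a, d_b)
    s_max = np.linalg.svd(m, compute_uv=False)[0]
    return 1.0 - s_max**2

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)  # (|00> + |11>)/sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])              # |00>
e_bell = geometric_entanglement_bipartite(bell, 2, 2)     # ≈ 0.5
e_prod = geometric_entanglement_bipartite(product, 2, 2)  # ≈ 0.0
```

For genuinely multipartite states the maximization no longer reduces to an SVD, which is where the paper's tensor-decomposition machinery comes in.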
Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval
ERIC Educational Resources Information Center
Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten
2008-01-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…
Visual Signaling in a High-Search Virtual World-Based Assessment: A SAVE Science Design Study
ERIC Educational Resources Information Center
Nelson, Brian C.; Kim, Younsu; Slack, Kent
2016-01-01
Education policy in the United States centers K-12 assessment efforts primarily on standardized tests. However, such tests may not provide an accurate and reliable representation of what students understand about the complexity of science. Research indicates that students tend to pass science tests, even if they do not understand the concepts…
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made from a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
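The canonical transition state theory expression at the core of such a kinetics calculator can be sketched as follows; the physical constants are standard, but the function and parameter choices are illustrative assumptions, not the AutoTST implementation:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(T, dG_act, kappa=1.0):
    """Canonical transition state theory (Eyring) rate constant.
    dG_act: Gibbs free energy of activation in J/mol; kappa: tunnelling
    correction factor. An illustrative sketch only."""
    return kappa * (KB * T / H) * math.exp(-dG_act / (R * T))

k_298 = tst_rate(298.15, 80e3)  # hypothetical 80 kJ/mol barrier
```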
NASA Technical Reports Server (NTRS)
2000-01-01
The MicroPLB (personal locator beacon) is a search and rescue satellite-aided tracking (SARSAT) transmitter. When activated, it emits a distress signal to a constellation of internationally operated satellites. The endangered person's identity and location anywhere on Earth are automatically forwarded to central monitoring stations around the world, accurate to within just a few meters. The user downloads navigation data to the device from a global positioning satellite receiver; after the download is complete, the MicroPLB functions as a self-locating beacon. It is also the only PLB to use a safe battery: in the past, other PLB devices have used batteries volatile enough to explode with extreme force. The MicroPLB was developed by Microwave Monolithic, Inc. through SBIR funding from Glenn Research Center and Goddard Space Flight Center.
Molecular biomarkers for grass pollen immunotherapy
Popescu, Florin-Dan
2014-01-01
Grass pollen allergy represents a significant cause of allergic morbidity worldwide. Component-resolved diagnosis biomarkers are increasingly used in allergy practice to evaluate sensitization to grass pollen allergens. They allow the clinician to confirm genuine sensitization to the corresponding allergen plant sources and support an accurate prescription of allergy immunotherapy (AIT), an important approach in many regions of the world with great plant biodiversity and/or where pollen seasons may overlap. The search for candidate predictive biomarkers for grass pollen immunotherapy (tolerogenic dendritic cell and regulatory T cell biomarkers, serum blocking antibody biomarkers, especially functional ones, immune activation and immune tolerance soluble biomarkers, and apoptosis biomarkers) opens new opportunities for the early detection of clinical responders to AIT, for the follow-up of these patients and for the development of new allergy vaccines. PMID:25237628
Vision-based weld pool boundary extraction and width measurement during keyhole fiber laser welding
NASA Astrophysics Data System (ADS)
Luo, Masiyang; Shin, Yung C.
2015-01-01
In keyhole fiber laser welding processes, weld pool behavior is essential to determining welding quality. To better observe and control the welding process, accurate extraction of the weld pool boundary as well as its width is required. This work presents a weld pool edge detection technique based on an off-axial green illumination laser and a coaxial image capturing system that consists of a CMOS camera and optical filters. To cope with differences in image quality, a fully developed edge detection algorithm is proposed based on a search for the local maximum greyness gradient combined with linear interpolation. The extracted weld pool geometry and width are validated against actual weld width measurements and predictions from a numerical multiphase model.
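The local-maximum-gradient search with sub-pixel interpolation can be illustrated on a one-dimensional greyness profile; this is a hedged sketch of the general idea, not the authors' algorithm:

```python
import numpy as np

def edge_position(profile):
    """Locate an edge along a 1-D greyness profile: take the sample
    with the locally maximal greyness gradient, then refine to
    sub-pixel accuracy by interpolating between its neighbours."""
    g = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    i = int(np.argmax(g))
    if 0 < i < len(g) - 1:
        denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
        if denom != 0.0:
            # Vertex of the parabola through the three gradient samples.
            return i + 0.5 * (g[i - 1] - g[i + 1]) / denom
    return float(i)

# Synthetic profile: dark weld pool (~20) rising to bright background
# (~200) around x = 50; all values are illustrative.
x = np.arange(100)
profile = 20.0 + 180.0 / (1.0 + np.exp(-(x - 50) / 2.0))
edge = edge_position(profile)
```

In a 2-D image the same search would be run along each scan line crossing the boundary.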
Rhinoplasty perioperative database using a personal digital assistant.
Kotler, Howard S
2004-01-01
To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld operating with 8 megabytes of memory. The handheld and desktop databases were kept secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and the enhancement of surgical skills.
Forecasting influenza outbreak dynamics in Melbourne from Internet search query surveillance data.
Moss, Robert; Zarebski, Alexander; Dawson, Peter; McCaw, James M
2016-07-01
Accurate forecasting of seasonal influenza epidemics is of great concern to healthcare providers in temperate climates, as these epidemics vary substantially in their size, timing and duration from year to year, making it a challenge to deliver timely and proportionate responses. Previous studies have shown that Bayesian estimation techniques can accurately predict when an influenza epidemic will peak many weeks in advance, using existing surveillance data, but these methods must be tailored both to the target population and to the surveillance system. Our aim was to evaluate whether forecasts of similar accuracy could be obtained for metropolitan Melbourne (Australia). We used the bootstrap particle filter and a mechanistic infection model to generate epidemic forecasts for metropolitan Melbourne (Australia) from weekly Internet search query surveillance data reported by Google Flu Trends for 2006-14. Optimal observation models were selected from hundreds of candidates using a novel approach that treats forecasts akin to receiver operating characteristic (ROC) curves. We show that the timing of the epidemic peak can be accurately predicted 4-6 weeks in advance, but that the magnitude of the epidemic peak and the overall burden are much harder to predict. We then discuss how the infection and observation models and the filtering process may be refined to improve forecast robustness, thereby improving the utility of these methods for healthcare decision support. © 2016 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
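A bootstrap particle filter of the kind described can be sketched for a discrete-time stochastic SIR model; the parameter values, the Gaussian observation-noise model, and the function name are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bootstrap_filter(obs, n_particles, beta, gamma, pop, obs_sd, rng):
    """Bootstrap particle filter for a discrete-time stochastic SIR
    model, with weekly incidence observed under Gaussian noise."""
    S = np.full(n_particles, pop - 10.0)  # start with 10 infecteds
    I = np.full(n_particles, 10.0)
    means = []
    for y in obs:
        # Propagate each particle with binomial infection/recovery draws.
        p_inf = 1.0 - np.exp(-beta * I / pop)
        new_inf = rng.binomial(S.astype(int), p_inf)
        new_rec = rng.binomial(I.astype(int), 1.0 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        # Weight by the likelihood of the observation, then resample.
        w = np.exp(-0.5 * ((y - new_inf) / obs_sd) ** 2) + 1e-300
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        S, I = S[idx], I[idx]
        means.append(float(new_inf[idx].mean()))
    return means  # filtered mean weekly incidence

rng = np.random.default_rng(0)
weekly_obs = [30, 60, 120, 200, 250]  # illustrative counts
est = bootstrap_filter(weekly_obs, n_particles=500, beta=1.8,
                       gamma=1.0, pop=10_000, obs_sd=30.0, rng=rng)
```

Forecasting then amounts to propagating the resampled particles forward without further observations.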
Schneider, Janina Anne; Holland, Christopher Patrick
2017-04-13
Patient and consumer access to eHealth information is of crucial importance because of its role in patient-centered medicine and in improving knowledge about general aspects of health and medical topics. The objectives were to analyze and compare eHealth search patterns in a private (United States) and a public (United Kingdom) health care market. A new taxonomy of eHealth websites is proposed to organize the largest eHealth websites. An online measurement framework is developed that provides a precise and detailed measurement system. Online panel data are used to accurately track and analyze detailed search behavior across 100 of the largest eHealth websites in the US and UK health care markets. The health, medical, and lifestyle categories account for approximately 90% of online activity, and the e-pharmacy, social media, and professional categories account for the remaining 10%. Overall search penetration of eHealth websites is significantly higher in the private market (United States) than in the public market (United Kingdom). Almost twice as many eHealth users in the private market have adopted online search in the health and lifestyle categories, and they also spend more time per website than those in the public market. The use of medical websites for specific conditions is almost identical in both markets. The allocation of search effort across categories is similar in both markets. For all categories, the vast majority of eHealth users access only one website within each category. Those that conduct a search of two or more websites display very narrow search patterns. All users spend relatively little time on eHealth, that is, 3-7 minutes per website. The proposed online measurement framework exploits online panel data to provide a powerful and objective method of analyzing and exploring eHealth behavior.
The private health care system does appear to influence eHealth search behavior in terms of search penetration and time spent per website in the health and lifestyle categories. Two explanations are offered: (1) the prospect of medical costs in the private market gives users an incentive to conduct online search; and (2) health care information is more easily accessible through health care professionals in the United Kingdom than in the United States. However, the use of medical websites is almost identical, suggesting that patients interested in a specific condition are motivated to search for and evaluate health information, irrespective of the health care market. The relatively low level of search, in terms of the number of websites accessed and the average time per website, raises important questions about the actual level of patient informedness in both markets. Areas for future research are outlined. ©Janina Anne Schneider, Christopher Patrick Holland. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 13.04.2017.
Gourgiotis, Stavros; Kocher, Hemant M; Solaini, Leonardo; Yarollahi, Arvin; Tsiambas, Evangelos; Salemis, Nikolaos S
2008-08-01
Gallbladder cancer (GC) is a relatively rare but highly lethal neoplasm. We review the epidemiology, etiology, pathology, symptoms, diagnosis, staging, treatment, and prognosis of GC. A PubMed database search covering 1971 to February 2007 was performed. All abstracts were reviewed and articles on GC obtained; further references were extracted by hand-searching the bibliographies. The search was limited to the English language. The exact etiology of GC remains unclear, and the symptoms associated with primary GC are not specific. Treatment with radical cholecystectomy is curative but possible in only 10% to 30% of patients. For patients whose cancer is an incidental finding on pathologic review, re-resection is indicated, where feasible, for all disease except T1a. Patients with advanced disease should receive palliative treatment. Laparoscopic cholecystectomy is contraindicated in the presence of GC. Prognosis is generally extremely poor. Improvements in the outcome of surgical resection have caused this approach to be re-evaluated, while the role of chemotherapy and radiotherapy remains controversial.
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practical situations, fault monitoring signals from ship power equipment usually provide few samples, and the data features are nonlinear. This paper adopts the least squares support vector machine (LSSVM) to address fault pattern identification with small-sample data. Meanwhile, to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm adjusts the discovery probability and the search step length, which effectively addresses the slow search speed and low calculation accuracy of the basic CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
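A minimal cuckoo search with Lévy flights (via Mantegna's algorithm) is sketched below; this is the generic algorithm, whereas the paper's improvement additionally adapts the discovery probability and step length dynamically, which the sketch omits:

```python
import math
import random

def levy_step(beta, rng):
    """Levy-distributed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0) /
             (math.gamma((1.0 + beta) / 2.0) * beta *
              2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

def cuckoo_search(fitness, dim, n_nests=15, pa=0.25, iters=200, rng=None):
    """Basic cuckoo search minimising `fitness` over [-5, 5]^dim."""
    rng = rng or random.Random(0)
    nests = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
             for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        # Levy-flight move relative to the current best, kept if better.
        for i, nest in enumerate(nests):
            cand = [g + 0.01 * levy_step(1.5, rng) * (g - b)
                    for g, b in zip(nest, best)]
            if fitness(cand) < fitness(nest):
                nests[i] = cand
        # Abandon a fraction pa of the worst nests (host discovery).
        nests.sort(key=fitness)
        for i in range(int(pa * n_nests)):
            nests[-1 - i] = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(nests + [best], key=fitness)
    return best

sphere = lambda x: sum(v * v for v in x)  # illustrative objective
best = cuckoo_search(sphere, dim=2)
```

In the CS-LSSVM setting, the two search dimensions would be the LSSVM kernel parameter and penalty factor, with cross-validation error as the fitness.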
Transient classification in LIGO data using difference boosting neural network
NASA Astrophysics Data System (ADS)
Mukund, N.; Abraham, S.; Kandhasamy, S.; Mitra, S.; Philip, N. S.
2017-05-01
Detection and classification of transients in data from gravitational wave detectors are crucial for efficient searches for true astrophysical events and identification of noise sources. We present a hybrid method for classification of short duration transients seen in gravitational wave data using both supervised and unsupervised machine learning techniques. To train the classifiers, we use the relative wavelet energy and the corresponding entropy obtained by applying one-dimensional wavelet decomposition on the data. The prediction accuracy of the trained classifier on nine simulated classes of gravitational wave transients and also LIGO's sixth science run hardware injections are reported. Targeted searches for a couple of known classes of nonastrophysical signals in the first observational run of Advanced LIGO data are also presented. The ability to accurately identify transient classes using minimal training samples makes the proposed method a useful tool for LIGO detector characterization as well as searches for short duration gravitational wave signals.
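The relative wavelet energy and entropy features can be computed as follows; a Haar basis is hard-coded here for self-containment, since the abstract does not name a mother wavelet:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavelet_energy_entropy(signal, levels=3):
    """Relative wavelet energy per band and its Shannon entropy,
    the feature pair used to train the classifiers above."""
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(float(np.sum(detail ** 2)))
    energies.append(float(np.sum(approx ** 2)))  # final approximation band
    p = np.array(energies) / np.sum(energies)
    entropy = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return p, entropy

t = np.arange(256)
sig = np.sin(2.0 * np.pi * 5.0 * t / 256.0)  # illustrative test signal
p, ent = wavelet_energy_entropy(sig, levels=3)
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, so `p` is a proper probability vector.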
Forecasting new product diffusion using both patent citation and web search traffic.
Lee, Won Sang; Choi, Hyo Shin; Sohn, So Young
2018-01-01
Accurate demand forecasting for new technology products is a key factor in the success of a business. We propose a way to forecast a new product's diffusion through technology diffusion and interest diffusion, measured by the volume of patent citations and web search traffic, respectively. We apply the proposed method to forecast the sales of hybrid cars and industrial robots in the US market. The results show that technology diffusion, as represented by patent citations, can explain long-term sales of hybrid cars and industrial robots. On the other hand, interest diffusion, as represented by web search traffic, can help to improve the predictability of short-term market sales of hybrid cars. However, interest diffusion struggles to explain the sales of industrial robots because of the different market characteristics. The findings indicate that our proposed model explains the diffusion of consumer goods relatively well.
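The abstract does not specify its diffusion model, but the classic Bass model illustrates how an external (innovation) channel and an internal (imitation) channel combine, loosely analogous to the patent-citation and web-search channels above; parameters below are illustrative, not estimates from the paper:

```python
def bass_adopters(m, p, q, periods):
    """Discrete-time Bass diffusion: new adopters per period follow
    dN/dt = (p + q * N / m) * (m - N), where p captures external
    (innovation) influence and q internal (imitation) influence."""
    n_cum, path = 0.0, []
    for _ in range(periods):
        new = (p + q * n_cum / m) * (m - n_cum)
        n_cum += new
        path.append(new)
    return path  # new adopters per period

sales = bass_adopters(m=100_000, p=0.03, q=0.38, periods=20)
```

The characteristic rise-and-fall of the adoption curve emerges from the interaction of the two channels.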
NASA Astrophysics Data System (ADS)
van Setten, M. J.; Giantomassi, M.; Gonze, X.; Rignanese, G.-M.; Hautier, G.
2017-10-01
The search for new materials based on computational screening relies on methods that accurately predict, in an automatic manner, total energy, atomic-scale geometries, and other fundamental characteristics of materials. Many technologically important material properties directly stem from the electronic structure of a material, but the usual workhorse for total energies, namely density-functional theory, is plagued by fundamental shortcomings and errors from approximate exchange-correlation functionals in its prediction of the electronic structure. At variance, the GW method is currently the state-of-the-art ab initio approach for accurate electronic structure. It is mostly used to perturbatively correct density-functional theory results, but is, however, computationally demanding and also requires expert knowledge to give accurate results. Accordingly, it is not presently used in high-throughput screening: fully automatized algorithms for setting up the calculations and determining convergence are lacking. In this paper, we develop such a method and, as a first application, use it to validate the accuracy of G0W0 using the PBE starting point and the Godby-Needs plasmon-pole model (G0W0^GN@PBE) on a set of about 80 solids. The results of the automatic convergence study provide valuable insights. Indeed, we find correlations between computational parameters that can be used to further improve the automatization of GW calculations. Moreover, we find that the correlation between the PBE and the G0W0^GN@PBE gaps is much stronger than that between the G0W0^GN@PBE and experimental gaps. However, the G0W0^GN@PBE gaps still describe the experimental gaps more accurately than a linear model based on the PBE gaps.
With this paper, we hence show that GW can be made automatic and is more accurate than using an empirical correction of the PBE gap, but that, for accurate predictive results for a broad class of materials, an improved starting point or some type of self-consistency is necessary.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
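The symmetric rank-one (SR1) Hessian update mentioned above has a short closed form; the safeguard threshold below is a common convention, not necessarily the paper's:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-12):
    """Symmetric rank-one (SR1) update of a Hessian approximation B,
    given a step s and gradient change y, skipping near-singular
    denominators to avoid numerical blow-up."""
    r = y - B @ s
    denom = float(r @ s)
    if abs(denom) < tol:
        return B
    return B + np.outer(r, r) / denom

# Sanity check on a quadratic: exact curvature pairs y = H s recover
# H after steps spanning the space (hereditary property of SR1).
H = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, H @ s)
```

Unlike BFGS, SR1 need not stay positive definite, which is precisely why it can capture the curvature needed by a second order reliability analysis.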
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
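A minimal real-coded genetic algorithm of the kind discussed can be sketched as follows (a generic illustration; the encodings and constraint-handling schemes the text flags as critical are omitted):

```python
import random

def genetic_minimize(fitness, lo, hi, dim, pop_size=40, gens=80, rng=None):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation with clamping, and an elitist best tracker."""
    rng = rng or random.Random(1)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=fitness)  # tournament of 3
            p2 = min(rng.sample(pop, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(hi, max(lo, g + rng.gauss(0.0, 0.1)))
                     if rng.random() < 0.1 else g
                     for g in child]
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=fitness)
    return best

sphere = lambda x: sum(g * g for g in x)  # illustrative objective
best = genetic_minimize(sphere, -5.0, 5.0, dim=2)
```

Note that no gradients are required, which is the point made in (2): the same loop works on discontinuous or noisy objectives.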
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.
Accurate identification of peptides is a current challenge in mass spectrometry (MS) based proteomics. The standard approach uses a search routine to compare tandem mass spectra to a database of peptides associated with the target organism. These database search routines yield multiple metrics associated with the quality of the mapping of the experimental spectrum to the theoretical spectrum of a peptide. The structure of these results makes separating correct from false identifications difficult and has created a false identification problem. Statistical confidence scores are an approach to battle this false positive problem that has led to significant improvements in peptide identification. We have shown that machine learning, specifically the support vector machine (SVM), is an effective approach to separating true peptide identifications from false ones. The SVM-based peptide statistical scoring method transforms a peptide into a vector representation based on database search metrics to train and validate the SVM. In practice, following the database search routine, a peptide is converted into its vector representation and the SVM generates a single statistical score that is then used to classify presence or absence in the sample.
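The scoring idea can be sketched with a Pegasos-style linear SVM trained on vectors of search metrics; the feature values, labels, and training scheme here are illustrative assumptions, not the authors' implementation:

```python
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=200, rng=None):
    """Pegasos-style stochastic subgradient training of a linear SVM
    (no bias term; the toy data below are centred on the origin)."""
    rng = rng or random.Random(0)
    w = [0.0] * len(xs[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = ys[i] * sum(wj * xj for wj, xj in zip(w, xs[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularisation step
            if margin < 1.0:  # hinge-loss subgradient step
                w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
    return w

def svm_score(w, x):
    """Signed margin used as a single statistical score per peptide."""
    return sum(wj * xj for wj, xj in zip(w, x))

# Toy 'search metric' vectors: +1 = confident identifications, -1 = decoys.
xs = [(2.0, 1.0), (3.0, 2.0), (2.5, 2.5),
      (-2.0, -1.0), (-3.0, -2.0), (-2.5, -2.5)]
ys = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(xs, ys)
```

The single signed score collapses the multiple search metrics into one axis on which a confidence threshold can be set.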
Rendic, Slobodan P; Guengerich, Frederick P
2018-01-01
The present work describes the development of offline and web-searchable metabolism databases for drugs, other chemicals, and physiological compounds in human and model species, prompted by the large amount of data published after the year 1990. The intent was to provide a rapid and accurate approach to published data to be applied both in science and to assist therapy. Searches for the data were done using the PubMed database, accessing the Medline database of references and abstracts. In addition, data presented at scientific conferences (e.g., ISSX conferences) are included, covering the publishing period beginning with the year 1976. Application of the data is illustrated by the properties of benzo[a]pyrene (B[a]P) and its metabolites. Analyses show higher activity of P450 1A1 for activation of the (-)-isomer of trans-B[a]P-7,8-diol, while P450 1B1 exerts higher activity for the (+)-isomer. P450 1A2 showed equally low activity in the metabolic activation of both isomers. The information collected in the databases is applicable to the prediction of metabolic drug-drug and/or drug-chemical interactions in clinical and environmental studies. The data on the metabolism of a searched compound (exemplified by benzo[a]pyrene and its metabolites) also indicate toxicological properties of the products of specific reactions. The offline and web-searchable databases have a wide range of applications (e.g., computer-assisted drug design and development, optimization of clinical therapy, toxicological applications) and support adjustments in everyday lifestyles. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Mahar, Alyson L.; Compton, Carolyn; McShane, Lisa M.; Halabi, Susan; Asamura, Hisao; Rami-Porta, Ramon; Groome, Patti A.
2015-01-01
Introduction: Accurate, individualized prognostication for lung cancer patients requires the integration of standard patient and pathologic factors with biologic, genetic, and other molecular characteristics of the tumor. Clinical prognostic tools aim to aggregate information on an individual patient to predict disease outcomes such as overall survival, but little is known about their clinical utility and accuracy in lung cancer. Methods: A systematic search of the scientific literature for clinical prognostic tools in lung cancer published Jan 1, 1996-Jan 27, 2015 was performed. In addition, web-based resources were searched. A priori criteria determined by the Molecular Modellers Working Group of the American Joint Committee on Cancer were used to investigate the quality and usefulness of tools. Criteria included clinical presentation, model development approaches, validation strategies, and performance metrics. Results: Thirty-two prognostic tools were identified. Patients with metastases were the most frequently considered population in non-small cell lung cancer. All tools for small cell lung cancer covered that entire patient population. Included prognostic factors varied considerably across tools. Internal validity was not formally evaluated for most tools, and only eleven were evaluated for external validity. Two key considerations were highlighted for tool development: identification of an explicit purpose related to a relevant clinical population and clear decision points, and prioritized inclusion of established prognostic factors over emerging factors. Conclusions: Prognostic tools will contribute more meaningfully to the practice of personalized medicine if better study design and analysis approaches are used in their development and validation. PMID:26313682
Smeulers, Marian; Lucas, Cees; Vermeulen, Hester
2014-06-24
An accurate handover of clinical information is of great importance to continuity and safety of care. If clinically relevant information is not shared accurately and in a timely manner, it may lead to adverse events, delays in treatment and diagnosis, inappropriate treatment and omission of care. During the last decade the call for interventions to improve handovers has increased. These interventions aim to reduce the risk of miscommunication, misunderstanding and the omission of critical information. To determine the effectiveness of interventions designed to improve hospital nursing handover, specifically: to identify which nursing handover style(s) are associated with improved outcomes for patients in the hospital setting and which nursing handover style(s) are associated with improved nursing process outcomes. We searched the following electronic databases for primary studies: Cochrane EPOC Group specialised register (to 19 September 2012), Cochrane Central Register of Controlled Trials (CENTRAL) (to 1 March 2013), MEDLINE (1950 to 1 March 2013) OvidSP, EMBASE (1947 to 1 March 2013) OvidSP, CINAHL (Cumulative Index to Nursing and Allied Health Literature) (1980 to 1 March 2013) EbscoHost and ISI Web of Knowledge (Science Citation Index and Social Sciences Citation Index) (to 9 July 2012). The Database of Abstracts of Reviews (DARE) was searched for related reviews. We screened the reference lists of included studies and relevant reviews. We also searched the WHO International Clinical Trials Registry Platform (ICTRP) http://www.who.int/ictrp/en/ and Current Controlled Trials www.controlled-trials.com/mrct, and we conducted a search of grey literature web sites. Randomised controlled trials (RCTs or cluster-RCTs) evaluating any nursing handover style between nurses in a hospital setting with the aim of preventing adverse events or optimising the transfer of accurate essential information required for continuity of care, or both, were eligible.
Two review authors independently assessed trial quality and extracted data. The search identified 2178 citations, 28 of which were considered potentially relevant. After independent review of the full text of these studies, no eligible studies were identified for inclusion in this review due to the absence of studies with a randomised controlled study design. There was no evidence available to support conclusions about the effectiveness of nursing handover styles for ensuring continuity of information in hospitalised patients because we found no studies that fulfilled the methodological criteria for this review. As a consequence, uncertainty about the most effective practice remains. Research efforts should focus on strengthening the evidence about the effectiveness of nursing handover styles using well-designed, rigorous studies. According to current knowledge, the following guiding principles can be applied when redesigning the nursing handover process: face-to-face communication, structured documentation, patient involvement and use of information technology to support the process.
Routine development of objectively derived search strategies.
Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael
2012-02-29
Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. 
Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
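The development/validation split described above can be sketched in a few lines. Below is a minimal, hypothetical illustration (not IQWiG's actual textual analytic procedure) that ranks candidate search terms by document frequency in the development set and then checks recall on the held-out validation set:

```python
from collections import Counter
import re

def candidate_terms(dev_abstracts, top_n=10):
    """Rank single-word terms by document frequency in the development set."""
    df = Counter()
    for text in dev_abstracts:
        # Count each term at most once per document (document frequency).
        words = set(re.findall(r"[a-z]{4,}", text.lower()))
        df.update(words)
    return [term for term, _ in df.most_common(top_n)]

def recall(strategy_terms, validation_abstracts):
    """Fraction of validation references retrieved by OR-ing the terms."""
    hits = sum(
        any(t in text.lower() for t in strategy_terms)
        for text in validation_abstracts
    )
    return hits / len(validation_abstracts)
```

In the paper's example, a strategy developed this way from 25 development references went on to retrieve all 13 references in the validation set.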
Automated extraction of chemical structure information from digital raster images
Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro
2009-01-01
Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be run independently, in sequence, from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research articles.
PMID:19196483
Deurenberg, Rikie; Vlayen, Joan; Guillo, Sylvie; Oliver, Thomas K; Fervers, Beatrice; Burgers, Jako
2008-03-01
Effective literature searching is particularly important for clinical practice guideline development. Sophisticated searching and filtering mechanisms are needed to help ensure that all relevant research is reviewed. To assess the methods used for the selection of evidence for guideline development by evidence-based guideline development organizations. A semistructured questionnaire assessing the databases, search filters and evaluation methods used for literature retrieval was distributed to eight major organizations involved in evidence-based guideline development. All of the organizations used search filters as part of guideline development. The MEDLINE database was the primary source accessed for literature retrieval. The OVID or SilverPlatter interfaces were used in preference to the freely accessed PubMed interface. The Cochrane Library, EMBASE, CINAHL and PsycINFO databases were also frequently used by the organizations. All organizations reported the intention to improve and validate their filters for finding literature specifically relevant for guidelines. In the first international survey of its kind, eight major guideline development organizations indicated a strong interest in identifying, improving and standardizing search filters to improve guideline development. It is to be hoped that this will result in the standardization of, and open access to, search filters, an improvement in literature searching outcomes and greater collaboration among guideline development organizations.
In Search of Black Swans: Identifying Students at Risk of Failing Licensing Examinations.
Barber, Cassandra; Hammond, Robert; Gula, Lorne; Tithecott, Gary; Chahine, Saad
2018-03-01
To determine which admissions variables and curricular outcomes are predictive of being at risk of failing the Medical Council of Canada Qualifying Examination Part 1 (MCCQE1), how quickly student risk of failure can be predicted, and to what extent predictive modeling is possible and accurate in estimating future student risk. Data from five graduating cohorts (2011-2015), Schulich School of Medicine & Dentistry, Western University, were collected and analyzed using hierarchical generalized linear models (HGLMs). Area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy of predictive models and determine whether they could be used to predict future risk, using the 2016 graduating cohort. Four predictive models were developed to predict student risk of failure at admissions, year 1, year 2, and pre-MCCQE1. The HGLM analyses identified gender, MCAT verbal reasoning score, two preclerkship course mean grades, and the year 4 summative objective structured clinical examination score as significant predictors of student risk. The predictive accuracy of the models varied. The pre-MCCQE1 model was the most accurate at predicting a student's risk of failing (AUC 0.66-0.93), while the admissions model was not predictive (AUC 0.25-0.47). Key variables predictive of students at risk were found. The predictive models developed suggest, while it is not possible to identify student risk at admission, we can begin to identify and monitor students within the first year. Using such models, programs may be able to identify and monitor students at risk quantitatively and develop tailored intervention strategies.
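The reported AUC values summarize how well each model's risk scores separate failing from passing students. A minimal sketch of the metric itself, using the rank-sum (Mann-Whitney) formulation and independent of the HGLM models the study actually fit:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum formulation:
    the probability that a randomly chosen positive case (label 1)
    receives a higher score than a randomly chosen negative case,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the admissions model's range (0.25-0.47) is reported as not predictive.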
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van den Born, Marlies; van der Veen, Albert; Sikkens-van de Kraats, Janine; van den Dungen, Frank A.; Verdaasdonk, Rudolf M.
2014-02-01
For infants and neonates in an incubator, vital signs such as heart rate, breathing, skin temperature and blood oxygen saturation are measured by sensors and electrodes sticking to the skin. This can damage the vulnerable skin of neonates and cause infections. In addition, the wires interfere with the care and hinder the parents in holding and touching the baby. These problems initiated the search for baby-friendly 'non-contact' measurement of vital signs. Using a sensitive color video camera and specially developed software, the heart rate was derived from subtle repetitive color changes. Potentially, respiration and oxygen saturation could also be obtained. A thermal camera was used to monitor the temperature distribution of the whole body and detect small temperature variations around the nose revealing the respiration rate. After testing in the laboratory, seven babies were monitored (with parental consent) in the neonatal intensive care unit (NICU) simultaneously with the regular monitoring equipment. From the color video recordings accurate heart rates could be derived, and the thermal images provided accurate respiration rates. To correct for the movements of the baby, tracking software could be applied. At present, the image processing was performed off-line. Using narrow-band light sources, non-contact blood oxygen saturation could also be measured. Non-contact monitoring of vital signs has proven to be feasible and can be developed into a real-time system. Besides the application on the NICU, non-contact vital function monitoring has large potential for other patient groups.
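The heart-rate extraction described above can be illustrated with a toy sketch: assuming the mean green-channel value of each video frame is available, the dominant frequency within a physiological band gives the pulse rate. This uses a plain DFT and is only an illustration of the principle; the study's actual software is not described at this level of detail:

```python
import math

def heart_rate_bpm(green_means, fps, lo=1.0, hi=3.0):
    """Estimate pulse rate from the mean green-channel value of each frame.
    Scans the DFT magnitude spectrum within a physiological band
    (lo..hi Hz, i.e. 60-180 bpm) and returns the dominant frequency in bpm."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]          # remove the DC component
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n                          # frequency of bin k, in Hz
        if not (lo <= f <= hi):
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
    return best_f * 60.0
```

In practice a windowed FFT over a sliding buffer of frames would be used, together with the motion tracking mentioned in the abstract.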
ECONOMICS AND THE SEARCH FOR OFFSHORE HEAVY MINERAL DEPOSITS.
Attanasi, E.D.; DeYoung, J.H.
1987-01-01
This paper examines the relative importance, in terms of a deposit's commercial status, of physical characteristics of onshore titanium-bearing heavy-mineral placer deposits, and applies these findings to the search for and evaluation of offshore deposits. Results obtained by applying statistical discriminant analysis show that the characteristics most useful for predicting a deposit's commercial status are the grades of the constituent titanium minerals and the size of the deposit. Heavy-mineral grade or even the combined grades of all titanium-bearing minerals (without information on the constituent mineral grades) are inferior predictors of a deposit's commercial status. When data from homogeneous regions are analyzed separately, the ability to accurately predict a deposit's commercial status improves.
Davis, Eric; Devlin, Sean; Cooper, Candice; Nhaissi, Melissa; Paulson, Jennifer; Wells, Deborah; Scaradavou, Andromachi; Giralt, Sergio; Papadopoulos, Esperanza; Kernan, Nancy A; Byam, Courtney; Barker, Juliet N
2018-05-01
A strategy to rapidly determine if a matched unrelated donor (URD) can be secured for allograft recipients is needed. We sought to validate the accuracy of (1) HapLogic match predictions and (2) a resultant novel Search Prognosis (SP) patient categorization that could predict the likelihood of an 8/8 HLA-matched URD at search initiation. Patient prognosis categories at search initiation were correlated with URD confirmatory typing results. HapLogic-based SP categorizations accurately predicted the likelihood of an 8/8 HLA-match in 830 patients (1530 donors tested). Sixty percent of patients had 8/8 URD(s) identified. Patient SP categories (217 very good, 104 good, 178 fair, 33 poor, 153 very poor, 145 futile) were associated with a marked progressive decrease in 8/8 URD identification and transplantation. Very good to good categories were highly predictive of identifying and receiving an 8/8 URD regardless of ancestry. Europeans in fair/poor categories were more likely to identify and receive an 8/8 URD compared with non-Europeans. In all ancestries very poor and futile categories predicted no 8/8 URDs. HapLogic permits URD search results to be predicted once patient HLA typing and ancestry are obtained, dramatically improving search efficiency. Poor, very poor, and futile searches can be immediately recognized, thereby facilitating prompt pursuit of alternative donors. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Liu, Chang
2012-01-01
When using information retrieval (IR) systems, users often pose short and ambiguous query terms. It is critical for IR systems to obtain more accurate representation of users' information need, their document preferences, and the context they are working in, and then incorporate them into the design of the systems to tailor retrieval to…
ERIC Educational Resources Information Center
Bliss, Kadi; Lodyga, Marc; Bochantin, Shelley; Null, Dawn
2010-01-01
A relatively new mobile text message service, "ChaCha," describes itself as "a smart search engine powered by human intelligence." The service claims to provide high-quality, accurate information, yet there is no research published to date substantiating this claim. The purpose of this study was to assess the extent to which health and…
A New Generation of Evidence: The Family is Critical to Student Achievement.
ERIC Educational Resources Information Center
Henderson, Anne T., Ed.; Berla, Nancy, Ed.
This report covers 66 studies, reviews, reports, analyses, and books. Of these, 39 are new; 27 have been carried over from previous editions. An ERIC search was conducted to identify relevant studies. Noting that the most accurate predictor of student achievement is the extent to which the family is involved in his or her education, this report…
Predicting Airport Screening Officers' Visual Search Competency With a Rapid Assessment.
Mitroff, Stephen R; Ericson, Justin M; Sharpe, Benjamin
2018-03-01
Objective The study's objective was to assess a new personnel selection and assessment tool for aviation security screeners. A mobile app was modified to create a tool, and the question was whether it could predict professional screeners' on-job performance. Background A variety of professions (airport security, radiology, the military, etc.) rely on visual search performance, that is, being able to detect targets. Given the importance of such professions, it is necessary to maximize performance, and one means to do so is to select individuals who excel at visual search. A critical question is whether it is possible to predict search competency within a professional search environment. Method Professional searchers from the U.S. Transportation Security Administration (TSA) completed a rapid assessment on a tablet-based X-ray simulator (XRAY Screener, derived from the mobile technology app Airport Scanner; Kedlin Company). The assessment contained 72 trials that were simulated X-ray images of bags. Participants searched for prohibited items and tapped on them with their finger. Results Performance on the assessment significantly related to on-job performance measures for the TSA officers such that those who were better XRAY Screener performers were both more accurate and faster at the actual airport checkpoint. Conclusion XRAY Screener successfully predicted on-job performance for professional aviation security officers. While questions remain about the underlying cognitive mechanisms, this quick assessment was found to significantly predict on-job success for a task that relies on visual search performance. Application It may be possible to quickly assess an individual's visual search competency, which could help organizations select new hires and assess their current workforce.
The coupling between gaze behavior and opponent kinematics during anticipation of badminton shots.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2014-10-01
We examined links between the kinematics of an opponent's actions and the visual search behaviors of badminton players responding to those actions. A kinematic analysis of international standard badminton players (n = 4) was undertaken as they completed a range of serves. Video of these players serving was used to create a life-size temporal occlusion test to measure anticipation responses. Expert (n = 8) and novice (n = 8) badminton players anticipated serve location while wearing an eye movement registration system. During the execution phase of the opponent's movement, the kinematic analysis showed between-shot differences in distance traveled and peak acceleration at the shoulder, elbow, wrist and racket. Experts were more accurate at responding to the serves compared to novice players. Expert players fixated on the kinematic locations that were most discriminating between serve types more frequently and for a longer duration compared to novice players. Moreover, players were generally more accurate at responding to serves when they fixated vision upon the discriminating arm and racket kinematics. Findings extend previous literature by providing empirical evidence that expert athletes' visual search behaviors and anticipatory responses are inextricably linked to the opponent action being observed. Copyright © 2014 Elsevier B.V. All rights reserved.
Annotation: a computational solution for streamlining metabolomics analysis
Domingo-Almenara, Xavier; Montenegro-Burke, J. Rafael; Benton, H. Paul; Siuzdak, Gary
2017-01-01
Metabolite identification is still considered an imposing bottleneck in liquid chromatography mass spectrometry (LC/MS) untargeted metabolomics. The identification workflow usually begins with detecting relevant LC/MS peaks via peak-picking algorithms and retrieving putative identities based on accurate mass searching. However, accurate mass search alone provides poor evidence for metabolite identification. For this reason, computational annotation is used to reveal the underlying metabolites' monoisotopic masses, improving putative identification in addition to confirmation with tandem mass spectrometry. This review examines LC/MS data from a computational and analytical perspective, focusing on the occurrence of neutral losses and in-source fragments, to understand the challenges in computational annotation methodologies. Herein, we examine the state-of-the-art strategies for computational annotation including: (i) peak grouping or full scan (MS1) pseudo-spectra extraction, i.e., clustering all mass spectral signals stemming from each metabolite; (ii) annotation using ion adduction and mass distance among ion peaks; (iii) incorporation of biological knowledge such as biotransformations or pathways; (iv) tandem MS data; and (v) metabolite retention time calibration, usually achieved by prediction from molecular descriptors. Advantages and pitfalls of each of these strategies are discussed, as well as expected future trends in computational annotation. PMID:29039932
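Accurate mass searching itself reduces to a tolerance query against a compound database. A minimal sketch using a parts-per-million window (the database entries below are illustrative; glucose and fructose share a monoisotopic mass, which is exactly why mass search alone is weak evidence):

```python
def mass_search(query_mz, db, tol_ppm=5.0):
    """Return database entries whose monoisotopic mass lies within
    tol_ppm parts-per-million of the query mass."""
    tol = query_mz * tol_ppm / 1e6   # absolute tolerance in Da
    return [name for name, mass in db.items() if abs(mass - query_mz) <= tol]
```

Isomers (and isobaric compounds within the tolerance) all match, so downstream annotation or tandem MS is needed to choose among the hits.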
[Use of PubMed to improve evidence-based medicine in routine urological practice].
Rink, M; Kluth, L A; Shariat, S F; Chun, F K; Fisch, M; Dahm, P
2013-03-01
Applying evidence-based medicine in daily clinical practice is the basis of patient-centered medicine and knowledge of accurate literature acquisition skills is necessary for informed clinical decision-making. PubMed is an easy accessible, free bibliographic database comprising over 21 million citations from the medical field, life-science journals and online books. The article summarizes the effective use of PubMed in routine urological clinical practice based on a common case scenario. This article explains the simple use of PubMed to obtain the best search results with the highest evidence. Accurate knowledge about the use of PubMed in routine clinical practice can improve evidence-based medicine and also patient treatment.
Eyeglasses Lens Contour Extraction from Facial Images Using an Efficient Shape Description
Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu
2013-01-01
This paper presents a system that automatically extracts the position of the eyeglasses and the accurate shape and size of the frame lenses in facial images. The novelty brought by this paper consists in three key contributions. The first one is an original model for representing the shape of the eyeglasses lens, using Fourier descriptors. The second one is a method for generating the search space starting from a finite, relatively small number of representative lens shapes based on Fourier morphing. Finally, we propose an accurate lens contour extraction algorithm using a multi-stage Monte Carlo sampling technique. Multiple experiments demonstrate the effectiveness of our approach. PMID:24152926
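The Fourier-descriptor representation of a closed contour, the first of the contributions above, can be sketched generically: contour points are treated as complex numbers and only the first few DFT coefficients are kept, with the k=0 coefficient dropped for translation invariance. This is a textbook sketch, not the authors' exact lens model:

```python
import cmath

def fourier_descriptors(contour, n_coeffs=8):
    """Fourier descriptors of a closed 2D contour given as (x, y) points.
    Each point becomes the complex number x + iy; coefficient k=0 (the
    centroid) is omitted, making the descriptors translation invariant."""
    n = len(contour)
    z = [complex(x, y) for x, y in contour]
    coeffs = []
    for k in range(1, n_coeffs + 1):
        c = sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
        coeffs.append(c)
    return coeffs
```

For a circle, all energy concentrates in the first coefficient, whose magnitude equals the radius; more elaborate lens shapes spread energy into higher coefficients, which is what makes the descriptors a compact shape signature.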
Earthquake prediction: the interaction of public policy and science.
Jones, L M
1996-01-01
Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors of informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake. PMID:11607656
Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James
2017-09-01
Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
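One of the techniques reviewed, term frequency-inverse document frequency, can be sketched in a few lines: terms frequent in a set of on-topic records but rare in a background corpus are promoted as candidate search terms. The toy corpora below are hypothetical:

```python
import math
from collections import Counter

def tf_idf_terms(target_docs, background_docs, top_n=5):
    """Score terms by frequency in the target set weighted by inverse
    document frequency in a background corpus; high scorers are candidate
    search terms that distinguish the topic."""
    tokenize = lambda d: [w for w in d.lower().split() if w.isalpha()]
    tf = Counter(w for d in target_docs for w in tokenize(d))
    n_bg = len(background_docs)

    def idf(w):
        df = sum(w in tokenize(d) for d in background_docs)
        return math.log((1 + n_bg) / (1 + df))  # smoothed IDF

    scored = sorted(tf, key=lambda w: tf[w] * idf(w), reverse=True)
    return scored[:top_n]
```

As the review notes, such scores improve precision (by excluding common terms) but must be balanced against sensitivity, since rare synonyms may score low simply because they are rare in the target set too.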
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a compound database with fewer but more complete entries. Conclusions/Significance High-accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
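The decoy idea behind such an FDR estimate can be sketched as follows. This is a simplified stand-in for the paper's Monte Carlo procedure (which estimates four parameters by simulation): random masses drawn over the same range as the real queries hit the database only by chance, so their hit rate approximates the chance-match rate hiding among the real hits:

```python
import random

def estimate_fdr(real_queries, db_masses, tol_ppm=5.0, n_decoys=1000, seed=0):
    """Crude FDR estimate for a mass tolerance search: compare the hit
    rate of real query masses with that of uniform random decoy masses
    drawn over the same mass range."""
    rng = random.Random(seed)
    lo, hi = min(real_queries), max(real_queries)

    def hits(mz):
        tol = mz * tol_ppm / 1e6
        return any(abs(m - mz) <= tol for m in db_masses)

    real_rate = sum(hits(q) for q in real_queries) / len(real_queries)
    decoy_rate = sum(hits(rng.uniform(lo, hi)) for _ in range(n_decoys)) / n_decoys
    # Decoy hits are false by construction; their rate approximates the
    # expected fraction of chance matches among the real hits.
    return decoy_rate / real_rate if real_rate else 0.0
```

On a sparse database the decoy hit rate, and hence the estimated FDR, is near zero; as the database grows denser (as with all-in-one databases such as PubChem), decoy hits become common and the estimated FDR rises, matching the trend reported above.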
Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.
Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue
2016-09-08
By transforming data into binary representations, i.e., Hashing, we can perform high-speed search with low storage cost, and thus Hashing has attracted increasing research interest in recent years. Recently, how to generate Hashcodes for multimodal data (e.g., images with textual tags, documents with photos, etc.) for large-scale cross-modality search (e.g., searching semantically related images in a database for a document query) has become an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal Hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified Hashcodes for different modalities of one multimodal instance in the shared latent semantic space in which different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. Based on the general framework, we extend it in the unsupervised scenario, where it tries to preserve the Euclidean structure, and in the supervised scenario, where it fully exploits the label information of data. The corresponding theoretical analysis and the optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality Hashing methods, which validates the effectiveness of the proposed CMFH.
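Whatever the learning scheme, the retrieval side of cross-modality hashing reduces to Hamming-distance ranking over the unified binary codes. A minimal sketch with hypothetical codes (learning the codes via collective matrix factorization is the paper's contribution and is not reproduced here):

```python
def hamming(a, b):
    """Hamming distance between two hash codes stored as integers."""
    return bin(a ^ b).count("1")

def search(query_code, db_codes, k=2):
    """Return the k database items nearest to the query in Hamming space.
    Because codes from all modalities live in one shared space, a text
    query's code can retrieve image items and vice versa."""
    return sorted(db_codes, key=lambda item: hamming(query_code, db_codes[item]))[:k]
```

The XOR-and-popcount distance is what makes hashing fast: it is a couple of machine instructions per comparison, regardless of the original feature dimensionality.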
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
Health search engine with e-document analysis for reliable search results.
Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine
2006-01-01
After a review of the existing practical solutions available to the citizen for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing medical concepts, conclusions and references contained in the health literature, to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine), and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increasing database coverage, explaining the original functionalities and adapting to the audience. Thanks to the evaluation outcomes, WRAPIN is now in operation on the HON web site (http://www.healthonnet.org), free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user looks up trustworthy health and medical information or wants to automatically check the doubtful content of a Web page.
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. 
Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
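The core tension this abstract describes, balancing measurement fidelity against prior-image information through a regularization strength, can be illustrated with a deliberately minimal one-dimensional sketch. The quadratic penalty and function name below are illustrative assumptions, not the paper's actual PIBR objective:

```python
def pibr_estimate(y, x_prior, beta):
    """Closed-form minimizer of (x - y)**2 + beta * (x - x_prior)**2,
    a toy 1-D analogue of a prior-image-penalized reconstruction."""
    return (y + beta * x_prior) / (1.0 + beta)

# beta = 0 trusts the measurement alone; a very large beta reproduces the
# prior and can wash out a genuine anatomical change in the new data.
measurement, prior = 5.0, 1.0
estimates = {b: pibr_estimate(measurement, prior, b) for b in (0.0, 1.0, 100.0)}
```

Selecting the prior strength prospectively, as the paper proposes, amounts to choosing `beta` (here a scalar, there a spatial map) so that a presumed change survives the pull toward the prior.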
Heart Disease and Depression: Is Culture a Factor?
Gholizadeh, Leila; Davidson, Patricia M; Heydari, Mehrdad; Salamonson, Yenna
2014-07-01
This article seeks to review and discuss the evidence linking depression, coronary heart disease (CHD), and culture. PsychInfo, CINAHL, PubMed, and Google were searched for pertinent evidence linking depression, culture, and CHD, and retrieved articles were analyzed using thematic content analysis. The identified themes were: depression is a factor in the development and prognosis of CHD and affects the capacity to self-manage and adhere to treatment recommendations; culture mediates mental health/illness representations and treatment-seeking behaviors; screening and assessment of depression can be affected by cultural factors; and there is a need for culturally appropriate screening and therapeutic strategies. As depression is a predictor and moderating variable in the genesis and progression of CHD, understanding how factors such as culture affect screening and management of the disease is important to inform the development of culturally and linguistically competent strategies that ensure accurate screening, detection, and treatment of depression in cardiac patients in clinical practice. © The Author(s) 2014.
Holsti, Liisa; Grunau, Ruth E
2007-06-01
Accurate assessment and treatment of pain and stress in preterm infants in neonatal intensive care units (NICU) is vital because pain and stress responses have been linked to long-term alterations in development in this population. To review the evidence of specific extremity movements in preterm infants as observed during stressful procedures. Five on-line databases were searched for relevant studies. For each study, levels of evidence were determined and effect size estimates were calculated. Each study was also evaluated for specific factors that presented potential threats to its validity. Eighteen studies were identified and seven comprised the review. The combined sample included 359 preterm infants. Six specific movements were associated with painful and intrusive procedures. A set of specific extremity movements, when combined with other reliable biobehavioural measures of pain and stress, can form the basis for future research and development of a clinical stress scale for preterm infants.
Liau, Siow Yen; Mohamed Izham, M I; Hassali, M A; Shafie, A A
2010-01-01
Cardiovascular diseases, the main causes of hospitalisations and death globally, have put an enormous economic burden on the healthcare system. Several risk factors are associated with the occurrence of cardiovascular events. At the heart of efficient prevention of cardiovascular disease is the concept of risk assessment. This paper aims to review the available cardiovascular risk-assessment tools and their applicability in predicting cardiovascular risk among Asian populations. A systematic search was performed using keywords as MeSH terms combined with Boolean operators. A total of 25 risk-assessment tools were identified. Of these, only two risk-assessment tools (8%) were derived from an Asian population. These risk-assessment tools differ in various ways, including characteristics of the derivation sample, type of study, time frame of follow-up, end points, statistical analysis and risk factors included. Very few cardiovascular risk-assessment tools have been developed in Asian populations. In order to accurately predict the cardiovascular risk of our population, there is a need to develop a risk-assessment tool based on local epidemiological data.
COROT mission: accurate stellar photometry
NASA Astrophysics Data System (ADS)
Costes, Vincent; Bodin, Pierre; Levacher, Patrick; Auvergne, Michel
2004-06-01
The COROT mission is dedicated to stellar seismology and the search for telluric extra-solar planets. The development is led by CNES in association with French laboratories (LESIA, LAM and IAS) and several European partners (Germany, Belgium, Austria, Spain, ESA and Brazil). The COROT seismology program will measure periodic variations with an amplitude of 2×10^-6 in the photon flux emitted by bright stars. The COROT exoplanet program will detect the presence of exoplanets using the radiometric occultation method. The need is to detect photon flux variations of about 7×10^-4 for a one-hour integration time. Such performance will permit the detection of occultations on a very large number of stars of magnitude between 12 and 15.5. The satellite Preliminary Design Review was held in January 2004, while the instrument is already in its development phase, with a Critical Design Review in April 2004 and delivery of the flight model in March 2005. The launch is scheduled for June 2006. This paper recalls the mission and describes the payload and its main noise performances.
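As a rough sanity check on the quoted precision targets (this calculation is not part of the paper), pure photon-counting statistics set a floor of 1/sqrt(N) on relative photometric precision:

```python
import math

def photon_noise_limit(n_photons):
    """Relative photometric precision from Poisson (shot) noise alone."""
    return 1.0 / math.sqrt(n_photons)

def photons_needed(rel_precision):
    """Photons per integration for the shot-noise floor to reach rel_precision."""
    return math.ceil(1.0 / rel_precision ** 2)

# Reaching the ~7e-4 relative precision quoted for the exoplanet program
# requires on the order of two million collected photons per one-hour
# integration, before any instrumental or stellar noise is considered.
n_exo = photons_needed(7e-4)
```

This is why the faint end of the target range (magnitude 15.5) is the hard case: fewer photons per hour means a higher shot-noise floor.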
Preparation and application of functionalized nano drug carriers.
Gong, Rudong; Chen, Gaimin
2016-05-01
With a focus on the categories, characteristics and preparation methods of functionalized nano drugs, the preparation technology of functionalized nano drug carriers is studied, together with the important role these carriers play in drug formulation. A computerized literature search was carried out, limited to Chinese-language publications, and duplicate studies were removed. After a first review of the 1260 retrieved publications, nano drugs were found to offer accurate dosing together with relatively good targeting, specificity and absorbency. Research on nano drug carriers can, to a certain extent, support the prevention and treatment of disease. The preparation of functionalized nanocarriers is simple and convenient, which can increase the use of nano preparation technology and provide better development space for medical applications. Therefore, nanocarriers should be combined in clinical use with drugs of relatively strong specificity, in order to enable effective research on intelligent nano drugs, promote the long-term development of nano biotechnology, and provide a favorable, reliable basis for clinical diagnosis and treatment.
NASA Astrophysics Data System (ADS)
El-Jaat, Majda; Hulley, Michael; Tétreault, Michel
2018-02-01
Despite the broad impact and importance of saltwater intrusion in coastal aquifers, little research has been directed towards forecasting saltwater intrusion in areas where the source of saltwater is uncertain. Saline contamination in inland groundwater supplies is a concern for numerous communities in the southern US including the city of Deltona, Florida. Furthermore, conventional numerical tools for forecasting saltwater contamination are heavily dependent on reliable characterization of the physical characteristics of underlying aquifers, information that is often absent or challenging to obtain. To overcome these limitations, a reliable alternative data-driven model for forecasting salinity in a groundwater supply was developed for Deltona using the fast orthogonal search (FOS) method. FOS was applied on monthly water-demand data and corresponding chloride concentrations at water supply wells. Groundwater salinity measurements from Deltona water supply wells were applied to evaluate the forecasting capability and accuracy of the FOS model. Accurate and reliable groundwater salinity forecasting is necessary to support effective and sustainable coastal-water resource planning and management. The available (27) water supply wells for Deltona were randomly split into three test groups for the purposes of FOS model development and performance assessment. Based on four performance indices (RMSE, RSR, NSEC, and R), the FOS model proved to be a reliable and robust forecaster of groundwater salinity. FOS is relatively inexpensive to apply, is not based on rigorous physical characterization of the water supply aquifer, and yields reliable estimates of groundwater salinity in active water supply wells.
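The four performance indices named in this abstract are standard in hydrological model evaluation. A straightforward implementation, assuming the conventional definitions of RMSE, RSR, Nash-Sutcliffe efficiency (abbreviated NSEC in the paper) and Pearson's R, might look like:

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rsr(obs, pred):
    """RMSE normalized by the standard deviation of the observations."""
    mean_o = sum(obs) / len(obs)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / len(obs))
    return rmse(obs, pred) / std_o

def nse(obs, pred):
    """Nash-Sutcliffe efficiency coefficient (1.0 is a perfect fit)."""
    mean_o = sum(obs) / len(obs)
    return 1.0 - (sum((o - p) ** 2 for o, p in zip(obs, pred))
                  / sum((o - mean_o) ** 2 for o in obs))

def pearson_r(obs, pred):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)
```

Using the four indices together guards against the weakness of any one of them: a constant-bias forecast can score well on R while NSE and RSR expose the offset.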
Body image disturbance in adults treated for cancer - a concept analysis.
Rhoten, Bethany A
2016-05-01
To report an analysis of the concept of body image disturbance in adults who have been treated for cancer as a phenomenon of interest to nurses. Although the concept of body image disturbance has been clearly defined in adolescents and adults with eating disorders, adults who have been treated for cancer may also experience body image disturbance. In this context, the concept of body image disturbance has not been clearly defined. Concept analysis. PubMed, Psychological Information Database and Cumulative Index of Nursing and Allied Health Literature were searched for publications from 1937 - 2015. Search terms included body image, cancer, body image disturbance, adult and concept analysis. Walker and Avant's 8-step method of concept analysis was used. The defining attributes of body image disturbance in adults who have been treated for cancer are: (1) self-perception of a change in appearance and displeasure with the change or perceived change in appearance; (2) decline in an area of function; and (3) psychological distress regarding changes in appearance and/or function. This concept analysis provides a foundation for the development of multidimensional assessment tools and interventions to alleviate body image disturbance in this population. A better understanding of body image disturbance in adults treated for cancer will assist nurses and other clinicians in identifying this phenomenon and nurse scientists in developing instruments that accurately measure this condition, along with interventions that will promote a better quality of life for survivors. © 2016 John Wiley & Sons Ltd.
The use of 3D surface scanning for the measurement and assessment of the human foot
2010-01-01
Background A number of surface scanning systems with the ability to quickly and easily obtain 3D digital representations of the foot are now commercially available. This review aims to present a summary of the reported use of these technologies in footwear development, the design of customised orthotics, and investigations for other ergonomic purposes related to the foot. Methods The PubMed and ScienceDirect databases were searched. Reference lists and experts in the field were also consulted to identify additional articles. Studies in English which had 3D surface scanning of the foot as an integral element of their protocol were included in the review. Results Thirty-eight articles meeting the search criteria were included. Advantages and disadvantages of using 3D surface scanning systems are highlighted. A meta-analysis of studies using scanners to investigate the changes in foot dimensions during varying levels of weight bearing was carried out. Conclusions Modern 3D surface scanning systems can obtain accurate and repeatable digital representations of the foot shape and have been successfully used in medical, ergonomic and footwear development applications. The increasing affordability of these systems presents opportunities for researchers investigating the foot and for manufacturers of foot related apparel and devices, particularly those interested in producing items that are customised to the individual. Suggestions are made for future areas of research and for the standardization of the protocols used to produce foot scans. PMID:20815914
Validation of search filters for identifying pediatric studies in PubMed.
Leclercq, Edith; Leeflang, Mariska M G; van Dalen, Elvira C; Kremer, Leontien C M
2013-03-01
To identify and validate PubMed search filters for retrieving studies including children and to develop a new pediatric search filter for PubMed. We developed 2 different datasets of studies to evaluate the performance of the identified pediatric search filters, expressed in terms of sensitivity, precision, specificity, accuracy, and number needed to read (NNR). An optimal search filter will have a high sensitivity and high precision with a low NNR. In addition to the PubMed Limits: All Child: 0-18 years filter (in May 2012 renamed to PubMed Filter Child: 0-18 years), 6 search filters for identifying studies including children were identified: 3 developed by Kastner et al., 1 developed by BestBets, 1 by the Child Health Field, and 1 by the Cochrane Childhood Cancer Group. Three search filters (Cochrane Childhood Cancer Group, Child Health Field, and BestBets) had the highest sensitivity (99.3%, 99.5%, and 99.3%, respectively) but a lower precision (64.5%, 68.4%, and 66.6%, respectively) compared with the other search filters. Two Kastner search filters had a high precision (93.0% and 93.7%, respectively) but a low sensitivity (58.5% and 44.8%, respectively). They failed to identify many pediatric studies in our datasets. The search terms responsible for false-positive results in the reference dataset were determined. With these data, we developed a new search filter for identifying studies with children in PubMed with an optimal sensitivity (99.5%) and precision (69.0%). Search filters to identify studies including children either have a low sensitivity or a low precision with a high NNR. A new pediatric search filter with a high sensitivity and a low NNR has been developed. Copyright © 2013 Mosby, Inc. All rights reserved.
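The metrics used to compare these filters all derive from confusion-matrix counts; a small helper (the function name and example counts are hypothetical) makes the relationship between precision and number needed to read explicit:

```python
def filter_metrics(tp, fp, tn, fn):
    """Standard retrieval metrics for a search filter.

    tp/fp/tn/fn are counts of true/false positives and negatives when the
    filter's output is compared against a gold-standard dataset.
    """
    sensitivity = tp / (tp + fn)     # fraction of relevant studies retrieved
    precision = tp / (tp + fp)       # fraction of retrieved studies relevant
    specificity = tn / (tn + fp)
    nnr = 1.0 / precision            # records to read per relevant record
    return {"sensitivity": sensitivity, "precision": precision,
            "specificity": specificity, "nnr": nnr}

m = filter_metrics(tp=80, fp=20, tn=60, fn=20)
```

Since NNR is the reciprocal of precision, the paper's observation that high-sensitivity filters had precision near 65-69% translates directly to reading roughly 1.5 records for every relevant one found.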
Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John
2016-01-01
Purpose To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. Methods En face OCT images derived from high density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer software (Heidelberg Engineering)) were compared and correlated with near infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch’s membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between custom software and Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions Graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
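The graph-search segmentation the custom software relies on can be sketched, in highly simplified form, as a dynamic-programming search for a minimum-cost boundary through a column-wise cost image. This toy seam search stands in for the full graph-theoretic formulation used for retinal layer segmentation:

```python
def min_cost_boundary(cost):
    """Find one boundary row per column, moving at most one pixel up or
    down between adjacent columns, minimizing the summed cost."""
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            acc[r][c] = cost[r][c] + min(acc[rp][c - 1] for rp in range(lo, hi))
    # Backtrack from the cheapest endpoint in the last column.
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        lo, hi = max(0, r - 1), min(rows, r + 2)
        r = min(range(lo, hi), key=lambda rr: acc[rr][c - 1])
        path.append(r)
    return path[::-1]

# Toy cost image: the middle row is cheap, mimicking a bright layer boundary.
cost = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
boundary = min_cost_boundary(cost)
```

In the real application the cost image is derived from OCT intensity gradients, and the recovered path is the segmented layer (e.g. basal RPE or Bruch's membrane) used to slice the en face image.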
Death, dying and informatics: misrepresenting religion on MedLine.
Rodríguez Del Pozo, Pablo; Fins, Joseph J
2005-07-01
The globalization of medical science carries for doctors worldwide a correlative duty to deepen their understanding of patients' cultural contexts and religious backgrounds, in order to satisfy each as a unique individual. To become better informed, practitioners may turn to MedLine, but it is unclear whether the information found there is an accurate representation of culture and religion. To test MedLine's representation of this field, we chose the topic of death and dying in the three major monotheistic religions. We searched MedLine using PubMed in order to retrieve and thematically analyze full-length scholarly journal papers or case reports dealing with religious traditions and end-of-life care. Our search consisted of a string of words that included the most common denominations of the three religions, the standard heading terms used by the National Reference Center for Bioethics Literature (NRCBL), and the Medical Subject Headings (MeSH) used by the National Library of Medicine. Eligible articles were limited to English-language papers with an abstract. We found that while a bibliographic search in MedLine on this topic produced instant results and some valuable literature, the aggregate reflected a selection bias. American writers were over-represented given the global prevalence of these religious traditions. Denominationally affiliated authors predominated in representing the Christian traditions. The Islamic tradition was under-represented. MedLine's capability to identify the most current, reliable and accurate information about purely scientific topics should not be assumed to be the same case when considering the interface of religion, culture and end-of-life care.
Death, dying and informatics: misrepresenting religion on MedLine
Rodríguez del Pozo, Pablo; Fins, Joseph J
2005-01-01
Background The globalization of medical science carries for doctors worldwide a correlative duty to deepen their understanding of patients' cultural contexts and religious backgrounds, in order to satisfy each as a unique individual. To become better informed, practitioners may turn to MedLine, but it is unclear whether the information found there is an accurate representation of culture and religion. To test MedLine's representation of this field, we chose the topic of death and dying in the three major monotheistic religions. Methods We searched MedLine using PubMed in order to retrieve and thematically analyze full-length scholarly journal papers or case reports dealing with religious traditions and end-of-life care. Our search consisted of a string of words that included the most common denominations of the three religions, the standard heading terms used by the National Reference Center for Bioethics Literature (NRCBL), and the Medical Subject Headings (MeSH) used by the National Library of Medicine. Eligible articles were limited to English-language papers with an abstract. Results We found that while a bibliographic search in MedLine on this topic produced instant results and some valuable literature, the aggregate reflected a selection bias. American writers were over-represented given the global prevalence of these religious traditions. Denominationally affiliated authors predominated in representing the Christian traditions. The Islamic tradition was under-represented. Conclusion MedLine's capability to identify the most current, reliable and accurate information about purely scientific topics should not be assumed to be the same case when considering the interface of religion, culture and end-of-life care. PMID:15992401
Visual perception-based criminal identification: a query-based approach
NASA Astrophysics Data System (ADS)
Singh, Avinash Kumar; Nandi, G. C.
2017-01-01
The visual perception of an eyewitness plays a vital role in criminal identification. It helps law enforcement authorities search for a particular criminal in their previous records. It has been reported that searching criminal records manually requires too much time to produce an accurate result. We have proposed a query-based approach which minimises the computational cost and reduces the search space. A symbolic database has been created to perform a stringent analysis on 150 public faces (Bollywood celebrities and Indian cricketers) and 90 local faces (our data-set). Expert knowledge has been captured to encapsulate every criminal's anatomical and facial attributes in the form of a symbolic representation. A fast query-based searching strategy has been implemented using a dynamic decision tree data structure which allows four levels of decomposition to fetch the respective criminal records. Two types of case studies, viewed and forensic sketches, have been considered to evaluate the strength of our proposed approach. We have derived 1200 views of the entire population by taking 80 participants as eyewitnesses. The system demonstrates an accuracy of 98.6% for test case I and 97.8% for test case II. It has also been shown that the approach reduces the search space to the 30 most relevant records.
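The level-by-level decomposition described here can be sketched as successive filtering of a symbolic record set. The attribute names and records below are hypothetical, not the paper's actual schema:

```python
# Hypothetical attribute levels, ordered from coarsest to finest.
LEVELS = ["gender", "age_band", "face_shape", "mark"]

RECORDS = [
    {"id": 1, "gender": "M", "age_band": "20-30", "face_shape": "oval", "mark": "scar"},
    {"id": 2, "gender": "M", "age_band": "20-30", "face_shape": "round", "mark": "none"},
    {"id": 3, "gender": "F", "age_band": "30-40", "face_shape": "oval", "mark": "mole"},
    {"id": 4, "gender": "M", "age_band": "30-40", "face_shape": "oval", "mark": "scar"},
]

def query_search(records, query):
    """Narrow the candidate set one attribute level at a time, so each
    answered query prunes the remaining search space."""
    candidates = records
    for level in LEVELS:
        if level in query:
            candidates = [r for r in candidates if r[level] == query[level]]
    return candidates

hits = query_search(RECORDS, {"gender": "M", "age_band": "20-30", "mark": "scar"})
```

Each level of the decision tree discards every branch inconsistent with the eyewitness's answer, which is why the search space shrinks to a handful of candidate records instead of the whole database.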
What Do Dogs know about Hidden Objects?
Miller, Holly C.; Rayburn-Reeves, Rebecca; Zentall, Thomas R.
2009-01-01
Previous research has found that dogs will search accurately for an invisibly displaced object when the task is simplified and contextual ambiguity is eliminated (Doré et al., 1996; Miller et al., 2008). For example, when an object is placed inside of one of two identical occluders attached to either end of a rotating beam and that beam is rotated 90°, dogs search inside of the appropriate occluder. The current research confirmed this finding and tested the possibility that the dogs were using a perceptual/conditioning mechanism (i.e., their gaze was drawn to the occluder as the object was placed inside and they continued looking at it as it rotated). The test was done by introducing a delay between the displacement of the object and the initiation of the dogs’ search. In Experiment 1, during the delay, a barrier was placed between the dog and the apparatus. In Experiment 2, the lights were turned off during the delay. The search accuracy for some dogs was strongly affected by the delay; however, search accuracy for other dogs was not affected. These results suggest that although a perceptual/conditioning mechanism may be involved for some dogs, it cannot account for the performance of others. It is likely that these other dogs showed true object permanence. PMID:19520244
Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task
Chia, Jingyi S.; Burns, Stephen F.; Barrett, Laura A.; Chow, Jia Y.
2017-01-01
The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance) however, did not differ significantly between groups which has implications on the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed especially within the skilled players. This augments the need for not just group-level analyses, but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide an insight to the possible visual search strategies as players serve in net-barrier games. 
Moreover, this study highlighted an important aspect of badminton relating to deception and the implications of interpreting visual behavior of players. PMID:28659850
Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task.
Chia, Jingyi S; Burns, Stephen F; Barrett, Laura A; Chow, Jia Y
2017-01-01
The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance) however, did not differ significantly between groups which has implications on the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed especially within the skilled players. This augments the need for not just group-level analyses, but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide an insight to the possible visual search strategies as players serve in net-barrier games.
Moreover, this study highlighted an important aspect of badminton relating to deception and the implications of interpreting visual behavior of players.
He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei
2012-06-25
Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure they produce cannot provide adequate information to identify whether a network has a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match the known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate compared to other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant.
The algorithm can reduce the computational time significantly while keeping high prediction accuracy.
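A highly simplified sketch of a single-pass, edge-weight-driven module separation in the spirit of GSM-FC follows. The merge criterion here is a plain weight threshold with union-find bookkeeping; the actual algorithm's edge-weight definition and core/attachment criteria are more elaborate:

```python
def greedy_modules(edges, threshold):
    """Traverse weight-sorted edges once, merging endpoint components
    whenever the edge weight clears the threshold (simplified criterion)."""
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        ru, rv = find(u), find(v)           # also registers both nodes
        if w >= threshold and ru != rv:
            parent[ru] = rv
    modules = {}
    for node in parent:
        modules.setdefault(find(node), set()).add(node)
    return sorted(modules.values(), key=len, reverse=True)

# Strong edges bind {a, b, c} and {d, e}; the weak c-d edge is skipped.
EDGES = [("a", "b", 0.9), ("b", "c", 0.8), ("d", "e", 0.7), ("c", "d", 0.1)]
modules = greedy_modules(EDGES, 0.5)
```

Because every edge is examined exactly once after sorting, the pass itself is linear in the number of edges, which is the source of the speedup over hierarchical approaches.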
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
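The division of labor this abstract describes, dimension reduction followed by transition-model estimation, can be sketched with a linear stand-in (an SVD projection plus least squares) in place of LSCE, which in the actual method estimates both jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional states whose dynamics live in a 2-d subspace.
W = rng.normal(size=(10, 2))            # lifts the 2-d latent state to 10-d
z = rng.normal(size=(200, 2))           # latent states
a = rng.normal(size=(200, 1))           # scalar actions
s = z @ W.T                             # observed 10-d states
z_next = 0.9 * z + 0.5 * a              # linear latent dynamics
s_next = z_next @ W.T

# Stand-in dimension reduction: top-2 principal directions of the states.
_, _, Vt = np.linalg.svd(s, full_matrices=False)
P = Vt[:2]                              # (2, 10) projection matrix

# Least-squares transition model fitted entirely in the reduced space.
X = np.hstack([s @ P.T, a])             # reduced state plus action
Y = s_next @ P.T
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)

err = float(np.abs(X @ theta - Y).max())  # one-step prediction error
```

Fitting the model in the 2-dimensional reduced space needs far fewer samples than fitting a 10-dimensional transition model directly, which is the data-efficiency argument the paper makes for combining the two steps.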
Atomic-resolution transmission electron microscopy of electron beam–sensitive crystalline materials
NASA Astrophysics Data System (ADS)
Zhang, Daliang; Zhu, Yihan; Liu, Lingmei; Ying, Xiangrong; Hsiung, Chia-En; Sougrat, Rachid; Li, Kun; Han, Yu
2018-02-01
High-resolution imaging of electron beam–sensitive materials is one of the most difficult applications of transmission electron microscopy (TEM). The challenges are manifold, including the acquisition of images with extremely low beam doses, the time-constrained search for crystal zone axes, the precise image alignment, and the accurate determination of the defocus value. We develop a suite of methods to fulfill these requirements and acquire atomic-resolution TEM images of several metal organic frameworks that are generally recognized as highly sensitive to electron beams. The high image resolution allows us to identify individual metal atomic columns, various types of surface termination, and benzene rings in the organic linkers. We also apply our methods to other electron beam–sensitive materials, including the organic-inorganic hybrid perovskite CH3NH3PbBr3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnes, P.; et al.
A Geant4-based Monte Carlo package named G4DS has been developed to simulate the response of DarkSide-50, an experiment operating since 2013 at LNGS, designed to detect WIMP interactions in liquid argon. In the process of WIMP searches, DarkSide-50 has achieved two fundamental milestones: the rejection of electron recoil background with a power of ~10^7, using the pulse shape discrimination technique, and the measurement of the residual 39Ar contamination in underground argon, ~3 orders of magnitude lower with respect to atmospheric argon. These results rely on the accurate simulation of the detector response to the liquid argon scintillation, its ionization, and electron-ion recombination processes. This work provides a complete overview of the DarkSide Monte Carlo and of its performance, with a particular focus on PARIS, the custom-made liquid argon response model.
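The pulse shape discrimination mentioned here separates nuclear from electron recoils by the fraction of scintillation light arriving promptly. An illustrative version of such a statistic (the window length and hit-time lists below are toy values, not DarkSide data):

```python
def f90(hit_times_ns, prompt_window_ns=90.0):
    """Fraction of detected photons arriving within the prompt window.

    Nuclear recoils produce mostly fast scintillation (high prompt
    fraction), while electron recoils are dominated by the slow
    component (low prompt fraction), enabling event-by-event rejection.
    """
    prompt = sum(1 for t in hit_times_ns if t <= prompt_window_ns)
    return prompt / len(hit_times_ns)

# Toy hit-time lists (ns); real events contain thousands of photoelectrons.
nuclear_like = [5, 12, 30, 70, 85, 400]          # mostly fast component
electron_like = [5, 150, 600, 900, 1200, 1500]   # mostly slow component
```

Simulating this statistic faithfully is exactly why the detector response model must reproduce the scintillation time structure, not just the total light yield.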
A Compact Instrument for Remote Raman and Fluorescence Measurements to a Radial Distance of 100 m
NASA Technical Reports Server (NTRS)
Sharma, S. K.; Misra, A. K.; Lucey, P. G.; McKay, C. P.
2005-01-01
Compact remote spectroscopic instruments that could provide detailed information about mineralogy, organic materials and biomaterials on a planetary surface over a relatively large area are desirable for NASA's planetary exploration program. The ability to explore a large area of a planetary surface, as well as impact craters, from the fixed location of a rover or lander will enhance the probability of selecting target rocks of high scientific content, as well as desirable sites in the search for organic compounds and biomarkers on Mars and other planetary bodies. We have developed a combined remote inelastic scattering (Raman) and laser-induced fluorescence emission (LIFE) compact instrument capable of providing accurate information about minerals, organic and biogenic materials to a radial distance of 100 m. Here we present the Raman and LIFE (R-LIFE) data set.
Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.
Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin
2009-01-01
Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, and therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
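The core marginal-space idea can be sketched in a few lines: rank one marginal (here, position) with the remaining parameter held at a nominal value, keep only the best few candidates, and search the remaining dimension for those alone. This is a toy Python sketch with a hypothetical objective and numbers, not the paper's detector:

```python
import itertools

def full_space_search(score, positions, scales):
    """Exhaustive search over the joint (position, scale) space."""
    return max(itertools.product(positions, scales),
               key=lambda ps: score(*ps))

def marginal_space_search(score, positions, scales, keep=3):
    """MSL-style search: rank positions with scale fixed at a nominal
    value, keep only the best few, then search scale for those only."""
    nominal = scales[len(scales) // 2]
    top = sorted(positions, key=lambda p: score(p, nominal))[-keep:]
    return max(itertools.product(top, scales),
               key=lambda ps: score(*ps))

# Toy objective peaked at position 7, scale 2.0 (hypothetical data).
def score(pos, scale):
    return -((pos - 7) ** 2) - (scale - 2.0) ** 2

positions = list(range(20))
scales = [1.0, 1.5, 2.0, 2.5, 3.0]
print(full_space_search(score, positions, scales))      # (7, 2.0)
print(marginal_space_search(score, positions, scales))  # (7, 2.0)
```

The exhaustive search evaluates 100 joint candidates, the marginal version only 35, while both recover the same optimum; the same trade-off drives MSL's speed-up in nine dimensions.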
ISE: An Integrated Search Environment. The manual
NASA Technical Reports Server (NTRS)
Chu, Lon-Chan
1992-01-01
Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines that support the solution of search problems; chiefly, these are core routines that control searches and maintain search-related statistics. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. ISE is designed to be user-friendly and information rich. In this manual, the organization of ISE is described, along with several experiments carried out with it.
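The separation ISE describes, a problem-independent search core driven by problem-dependent plug-ins, can be illustrated with a generic best-first search. The callback names (`expand`, `is_goal`, `priority`) and the toy problem are our own invention, not ISE's actual interface:

```python
import heapq

def best_first_search(start, expand, is_goal, priority):
    """Problem-independent core: the control loop and statistics live
    here; the problem supplies `expand`, `is_goal`, and `priority`
    as plug-in callbacks."""
    frontier = [(priority(start), start)]
    seen = {start}
    stats = {"expanded": 0}
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node, stats
        stats["expanded"] += 1
        for child in expand(node):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (priority(child), child))
    return None, stats

# Problem-dependent part: reach 13 from 1 using +1 / *2 moves.
goal, stats = best_first_search(
    1,
    expand=lambda n: [n + 1, n * 2],
    is_goal=lambda n: n == 13,
    priority=lambda n: abs(13 - n),
)
print(goal)  # 13
```

Swapping in a different `priority` changes the search method without touching the core, which is the reuse pattern the manual describes.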
A spatial model of the efficiency of T cell search in the influenza-infected lung.
Levin, Drew; Forrest, Stephanie; Banerjee, Soumya; Clay, Candice; Cannon, Judy; Moses, Melanie; Koster, Frederick
2016-06-07
Emerging strains of influenza, such as avian H5N1 and 2009 pandemic H1N1, are more virulent than seasonal H1N1 influenza, yet the underlying mechanisms for these differences are not well understood. Subtle differences in how a given strain interacts with the immune system are likely a key factor in determining virulence. One aspect of the interaction is the ability of T cells to locate the foci of the infection in time to prevent uncontrolled expansion. Here, we develop an agent based spatial model to focus on T cell migration from lymph nodes through the vascular system to sites of infection. We use our model to investigate whether different strains of influenza modulate this process. We calibrate the model using viral and chemokine secretion rates we measure in vitro together with values taken from literature. The spatial nature of the model reveals unique challenges for T cell recruitment that are not apparent in standard differential equation models. In this model comparing three influenza viruses, plaque expansion is governed primarily by the replication rate of the virus strain, and the efficiency of the T cell search-and-kill is limited by the density of infected epithelial cells in each plaque. Thus for each virus there is a different threshold of T cell search time above which recruited T cells are unable to control further expansion. Future models could use this relationship to more accurately predict control of the infection. Copyright © 2016 Elsevier Ltd. All rights reserved.
CONCAM's Fuzzy-Logic All-Sky Star Recognition Algorithm
NASA Astrophysics Data System (ADS)
Shamir, L.; Nemiroff, R. J.
2004-05-01
One of the purposes of the global Night Sky Live (NSL) network of fisheye CONtinuous CAMeras (CONCAMs) is to monitor and archive the entire bright night sky, track stellar variability, and search for transients. The high quality of raw CONCAM data allows automation of stellar object recognition, although distortions of the fisheye lens and frequent slight shifts in CONCAM orientations can make even this seemingly simple task formidable. To meet this challenge, a fuzzy logic based algorithm has been developed that transforms (x,y) image coordinates in the CCD frame into fuzzy right ascension and declination coordinates for use in matching with star catalogs. Using a training set of reference stars, the algorithm statically builds the fuzzy logic model. At runtime, the algorithm searches for peaks, and then applies the fuzzy logic model to perform the coordinate transformation before choosing the optimal star catalog match. The present fuzzy-logic algorithm works much better than our first generation, straightforward coordinate transformation formula. Following this essential step, algorithms dealing with the higher level data products can then provide a stream of photometry for a few hundred stellar objects visible in the night sky. Accurate photometry further enables the computation of all-sky maps of skyglow and opacity, as well as a search for uncataloged transients. All information is stored in XML-like tagged ASCII files that are instantly copied to the public domain and available at http://NightSkyLive.net. Currently, the NSL software detects stars and creates all-sky image files from eight different locations around the globe every 3 minutes and 56 seconds.
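The final catalog-matching step can be sketched with a simple fuzzy membership: treat the transformed (RA, Dec) as a fuzzy estimate and pick the catalog star maximizing joint membership. The triangular membership, the fixed spread, and the tiny catalog are illustrative assumptions; the actual NSL model is built from a training set of reference stars:

```python
def triangular_membership(x, center, spread):
    """Triangular fuzzy membership: 1 at `center`, falling linearly
    to 0 at `center` +/- `spread`."""
    return max(0.0, 1.0 - abs(x - center) / spread)

def best_catalog_match(fuzzy_ra, fuzzy_dec, catalog, spread=0.5):
    """Pick the catalog star whose (RA, Dec) maximizes the joint
    membership of the fuzzy coordinate estimate."""
    def joint(star):
        name, ra, dec = star
        return (triangular_membership(ra, fuzzy_ra, spread) *
                triangular_membership(dec, fuzzy_dec, spread))
    return max(catalog, key=joint)

# Toy catalog: (name, RA, Dec) in degrees.
catalog = [("Vega", 279.23, 38.78),
           ("Deneb", 310.36, 45.28),
           ("Altair", 297.70, 8.87)]
print(best_catalog_match(279.1, 38.9, catalog)[0])  # Vega
```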
Structure solution of DNA-binding proteins and complexes with ARCIMBOLDO libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pröpper, Kevin; Meindl, Kathrin
2014-06-01
The structure solution of DNA-binding protein structures and complexes based on the combination of location of DNA-binding protein motif fragments with density modification in a multi-solution frame is described. Protein–DNA interactions play a major role in all aspects of genetic activity within an organism, such as transcription, packaging, rearrangement, replication and repair. The molecular detail of protein–DNA interactions can be best visualized through crystallography, and structures emphasizing insight into the principles of binding and base-sequence recognition are essential to understanding the subtleties of the underlying mechanisms. An increasing number of high-quality DNA-binding protein structure determinations have been witnessed despite the fact that the crystallographic particularities of nucleic acids tend to pose specific challenges to methods primarily developed for proteins. Crystallographic structure solution of protein–DNA complexes therefore remains a challenging area that is in need of optimized experimental and computational methods. The potential of the structure-solution program ARCIMBOLDO for the solution of protein–DNA complexes has therefore been assessed. The method is based on the combination of locating small, very accurate fragments using the program Phaser and density modification with the program SHELXE. Whereas for typical proteins main-chain α-helices provide the ideal, almost ubiquitous, small fragments to start searches, in the case of DNA complexes the binding motifs and DNA double helix constitute suitable search fragments. The aim of this work is to provide an effective library of search fragments as well as to determine the optimal ARCIMBOLDO strategy for the solution of this class of structures.
Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.
Giedt, Joel; Thomas, Anthony W; Young, Ross D
2009-11-13
Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.
Zhang, H H; Gao, S; Chen, W; Shi, L; D'Souza, W D; Meyer, R R
2013-03-21
An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality.
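The surrogate-scoring idea can be sketched as an additive score: assign each candidate angle a precomputed quality value and rank beam sets by the sum, reserving the expensive dose optimization for the top-ranked few. The per-angle scores below are hypothetical placeholders, not data from the paper:

```python
import itertools

# Hypothetical per-angle quality scores, standing in for the
# single-beam data the authors extract from equally-spaced eplans.
single_beam_score = {0: 0.9, 40: 0.3, 80: 0.7, 120: 0.8,
                     160: 0.2, 200: 0.6, 240: 0.95, 280: 0.4, 320: 0.5}

def surrogate_score(beam_set):
    """Cheap stand-in for full dose optimization: additive scores."""
    return sum(single_beam_score[a] for a in beam_set)

# Rank every 3-beam set by the surrogate; only the best few would be
# passed on to the expensive large-scale dose optimization.
candidates = sorted(itertools.combinations(single_beam_score, 3),
                    key=surrogate_score, reverse=True)
print(candidates[0])  # (0, 120, 240)
```

Ranking all 84 three-beam sets costs 84 cheap sums instead of 84 full optimizations, which is what makes screening thousands of samples clinically feasible.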
A method for detecting and characterizing outbreaks of infectious disease from clinical reports.
Cooper, Gregory F; Villamarin, Ricardo; Tsui, Fu-Chiang (Rich); Millett, Nicholas; Espino, Jeremy U; Wagner, Michael M
2015-02-01
Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising. Copyright © 2014 Elsevier Inc. All rights reserved.
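The model-selection step can be illustrated with a minimal Bayesian comparison: score candidate epidemiological models by how well their predicted daily case rates explain the observed diagnosis counts, then normalize to a posterior. The Poisson likelihood, priors, and epidemic curve below are illustrative assumptions, not the paper's actual models:

```python
import math

def poisson_loglik(counts, rates):
    """Log-likelihood of daily counts under per-day Poisson rates."""
    return sum(c * math.log(r) - r - math.lgamma(c + 1)
               for c, r in zip(counts, rates))

def posterior(models, counts):
    """Posterior over epidemiological models given daily counts of
    (probabilistic) influenza diagnoses."""
    logs = {name: math.log(prior) + poisson_loglik(counts, rates)
            for name, (prior, rates) in models.items()}
    m = max(logs.values())
    z = sum(math.exp(v - m) for v in logs.values())
    return {name: math.exp(v - m) / z for name, v in logs.items()}

counts = [2, 3, 5, 9, 14]            # rising ED influenza diagnoses
models = {
    # name: (prior probability, expected daily case rates)
    "no outbreak": (0.9, [3, 3, 3, 3, 3]),
    "outbreak":    (0.1, [2, 3, 5, 8, 13]),   # hypothetical epi curve
}
post = posterior(models, counts)
print(max(post, key=post.get))  # outbreak
```

With richer model families, the same posterior also yields outbreak characteristics such as expected peak day and size, as the abstract describes.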
Haneda, Kiyofumi; Umeda, Tokuo; Koyama, Tadashi; Harauchi, Hajime; Inamura, Kiyonari
2002-01-01
The target of our study is to establish a methodology for analyzing the level of security requirements, for finding suitable security measures, and for optimizing the distribution of security across every portion of medical practice. Quantitative expression is introduced wherever possible, to permit easy follow-up of security procedures and easy evaluation of security outcomes. System analysis by fault tree analysis (FTA) clarified that subdividing system elements in detail contributes to much more accurate analysis. Such subdivided composition factors depended heavily on staff behavior, interactive terminal devices, kinds of service, and network routes. In conclusion, we found methods to analyze the level of security requirements for each medical information system by employing FTA, with basic events for each composition factor and combinations of basic events. Methods for finding suitable security measures were identified: namely, risk factors for each basic event, the number of elements for each composition factor, and candidate security measure elements. A method to optimize the security measures for each medical information system was proposed: the optimum distribution of risk factors in terms of basic events was determined, making comparison between medical information systems possible.
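The FTA computation itself reduces to combining basic-event probabilities through AND/OR gates up to a top event. This sketch assumes independent basic events, and the tree structure and probabilities are hypothetical, not taken from the study:

```python
from functools import reduce

def or_gate(*ps):
    """P(at least one child event occurs), assuming independence."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

def and_gate(*ps):
    """P(all child events occur), assuming independence."""
    return reduce(lambda acc, p: acc * p, ps, 1.0)

# Hypothetical tree for a medical-information breach: the top event
# occurs if the network path is both intercepted AND unencrypted,
# OR if a terminal-side basic event occurs.
p_breach = or_gate(
    and_gate(0.10, 0.50),     # interception AND no encryption
    or_gate(0.02, 0.01),      # staff error OR stolen terminal
)
print(round(p_breach, 4))  # 0.0783
```

Subdividing a composition factor into finer basic events, as the authors recommend, corresponds to replacing a single leaf probability with a deeper gated subtree.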
VIOLIN: vaccine investigation and online information network.
Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun
2008-01-01
Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.
3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head
NASA Astrophysics Data System (ADS)
Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan
2010-03-01
Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).
Williamson, A M; Feyer, A M; Mattick, R P; Friswell, R; Finlay-Brown, S
2001-05-01
The effects of 28 h of sleep deprivation were compared with varying doses of alcohol up to 0.1% blood alcohol concentration (BAC) in the same subjects. The study was conducted in the laboratory. Twenty long-haul truck drivers and 19 people not employed as professional drivers acted as subjects. Tests were selected that were likely to be affected by fatigue, including simple reaction time, unstable tracking, dual task, Mackworth clock vigilance test, symbol digit coding, visual search, sequential spatial memory and logical reasoning. While performance effects were seen due to alcohol for all tests, sleep deprivation affected performance on most tests, but had no effect on performance on the visual search and logical reasoning tests. Some tests showed evidence of a circadian rhythm effect on performance, in particular, simple reaction time, dual task, Mackworth clock vigilance, and symbol digit coding, but only for response speed and not response accuracy. Drivers were slower but more accurate than controls on the symbol digit test, suggesting that they took a more conservative approach to performance of this test. This study demonstrated which tests are most sensitive to sleep deprivation and fatigue. The study therefore has established a set of tests that can be used in evaluations of fatigue and fatigue countermeasures.
PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.
Ng, Marcus C K; Fong, Simon; Siu, Shirley W I
2015-06-01
Protein-ligand docking is an essential step in the modern drug discovery process. The challenge here is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution time reduction of 51-60% without compromising the prediction accuracies in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of a docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo.
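The hybrid structure, a PSO outer loop with a local refinement step applied to each particle, can be sketched on a one-dimensional toy "scoring function". The crude step-halving refinement below stands in for the BFGS step PSOVina borrows from Vina, and all parameters are illustrative:

```python
import random
random.seed(1)

def local_refine(f, x, step=0.1, iters=25):
    """Crude greedy descent standing in for BFGS refinement."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
        step *= 0.7
    return x

def pso(f, lo, hi, n=10, iters=40, w=0.7, c1=1.4, c2=1.4):
    """Particle swarm optimization with a local-search hybrid step."""
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    best = list(xs)                 # per-particle best positions
    gbest = min(xs, key=f)          # global best position
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (best[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            xs[i] = local_refine(f, xs[i])        # hybrid step
            if f(xs[i]) < f(best[i]):
                best[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Toy objective with its minimum at x = 1.85.
f = lambda x: (x - 2) ** 2 + 0.3 * abs(x + 1)
result = pso(f, -5, 5)
print(round(result, 2))  # 1.85
```

The swarm handles global exploration of the search space while the refinement polishes each candidate, mirroring the division of labor between PSO and BFGS in PSOVina.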
Goher, K M; Almeshal, A M; Agouri, S A; Nasir, A N K; Tokhi, M O; Alenezi, M R; Al Zanki, T; Fadlallah, S O
2017-01-01
This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to its chemotaxis approach; however, it suffers from oscillation near the end of the search process when using a large step size, while a small step size affords better exploitation and accuracy at the cost of slower convergence. SDA provides better stability when approaching an optimum point and has a faster convergence speed, but this may cause the search agents to become trapped in local optima, yielding less accurate solutions. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with either algorithm alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted-pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm yields better performance on these highly nonlinear systems.
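The spiral-dynamic half of the hybrid is easy to illustrate: each search point is rotated about the current best and contracted toward it, so the swarm sweeps in on a logarithmic spiral. This is a minimal sketch of the textbook SDA update on a toy objective, not the HSDBC controller itself; the rotation angle, contraction rate, and objective are assumptions:

```python
import math

def spiral_step(x, y, cx, cy, r=0.95, theta=math.pi / 6):
    """One spiral-dynamic move: rotate (x, y) about the current best
    (cx, cy) by `theta` and contract the radius by factor `r`."""
    dx, dy = x - cx, y - cy
    nx = cx + r * (dx * math.cos(theta) - dy * math.sin(theta))
    ny = cy + r * (dx * math.sin(theta) + dy * math.cos(theta))
    return nx, ny

def sda(f, points, iters=60):
    """Spiral-dynamic search: all points spiral toward the best-so-far."""
    best = min(points, key=lambda p: f(*p))
    for _ in range(iters):
        points = [spiral_step(x, y, *best) for x, y in points]
        cand = min(points, key=lambda p: f(*p))
        if f(*cand) < f(*best):
            best = cand
    return best

# Toy objective minimized at (1, -2).
f = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2
best = sda(f, [(-4.0, 3.0), (5.0, 5.0), (0.0, -6.0)])
print(best)  # close to the optimum (1, -2)
```

In HSDBC, a chemotaxis-style tumble would perturb these deterministic spiral moves, restoring the exploration that pure SDA loses when it contracts onto a local optimum.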
AMSNEXRAD-Automated detection of meteorite strewnfields in doppler weather radar
NASA Astrophysics Data System (ADS)
Hankey, Michael; Fries, Marc; Matson, Rob; Fries, Jeff
2017-09-01
For several years meteorite recovery in the United States has been greatly enhanced by using Doppler weather radar images to determine possible fall zones for meteorites produced by witnessed fireballs. While most fireball events leave no record on the Doppler radar, some large fireballs do. Based on the successful recovery of 10 meteorite falls 'under the radar', and the discovery of radar signatures for more than 10 historic falls, it is believed that meteoritic dust and/or actual meteorites falling to the ground have been recorded on Doppler weather radar (Fries et al., 2014). Up until this point, the process of detecting the radar signatures associated with meteorite falls has been a manual one, dependent on prior accurate knowledge of the fall time and estimated ground track. This manual detection process is labor intensive and can take several hours per event. Recent technological developments by NOAA now help enable the automation of these tasks. This, in combination with advancements by the American Meteor Society (Hankey et al., 2014) in the tracking and plotting of witnessed fireballs, has opened the possibility of automatic detection of meteorites in NEXRAD radar archives. Herein, the processes for fireball triangulation, search-area determination, radar interfacing, data extraction, storage, search, detection and plotting are explained.
Measurement of acute pain in infants: a review of behavioral and physiological variables.
Hatfield, Linda A; Ely, Elizabeth A
2015-01-01
The use of non-validated pain measurement tools to assess infant pain represents a serious iatrogenic threat to the developing neonatal nervous system. One partial explanation for this practice may be the contradictory empirical data from studies that use newborn pain management tools constructed for infants of different developmental stages or exposed to different environmental stressors. The purpose of this review is to evaluate the evidence regarding the physiologic and behavioral variables that accurately assess and measure acute pain response in infants. A literature search was conducted using PUBMED and CINAHL and the search terms infant, neonate/neonatal, newborn, pain, assessment, and measurement to identify peer-reviewed studies that examined the validity and reliability of behavioral and physiological variables used for investigation of infant pain. Ten articles were identified for critical review. Strong evidence supports the use of the behavioral variables of facial expressions and body movements and the physiologic variables of heart rate and oxygen saturation to assess acute pain in infants. It is incumbent upon researchers and clinical nurses to ensure the validity, reliability, and feasibility of pain measures, so that the outcomes of their investigations and interventions will be developmentally appropriate and effective pain management therapies. © The Author(s) 2014.
Caveats of smartphone applications for the cardiothoracic trainee.
Edlin, Joy C E; Deshpande, Ranjit P
2013-12-01
The clinical environment is becoming increasingly dominated by information technology, most recently the smartphone with its applications (apps) of a multitude of uses. There are already tens of thousands of medical apps available for download, to educate both patients and trainees, and many more are being designed to facilitate delivery of care. The rapid development of this technology has outgrown its quality evaluation and regulation, both urgently required to maintain patient safety, protect sensitive data, and ensure dissemination of accurate information. We review medical apps themed towards cardiothoracic surgery in terms of medical professional involvement in their content and design. iTunes and Play Store were searched for cardiothoracic surgery-themed medical apps, using the terms cardiothoracic, thoracic, cardiac, heart, lung, surgery, and variations thereof and including the term medical. A focused search yielded 379 apps, of which 6% were associated with a named medical professional, 15% with a publisher or professional society, and 63% with a user rating. The findings suggest inadequate input from the medical profession. The article discusses the pressing issues regarding quality evaluation, regulation, and information security, required for smartphones and handheld devices to become an integral and safe part of delivery of care. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
New advances in focal therapy for early stage prostate cancer.
Tay, Kae Jack; Schulman, Ariel A; Sze, Christina; Tsivian, Efrat; Polascik, Thomas J
2017-08-01
Prostate focal therapy offers men the opportunity to achieve oncological control while preserving sexual and urinary function. The prerequisites for successful focal therapy are to accurately identify, localize and completely ablate the clinically significant cancer(s) within the prostate. We aim to evaluate the evidence for current and upcoming technologies that could shape the future of prostate cancer focal therapy in the next five years. Areas covered: Current literature on advances in patient selection using imaging, biopsy and biomarkers, ablation techniques and adjuvant treatments for focal therapy are summarized. A literature search of major databases was performed using the search terms 'focal therapy', 'focal ablation', 'partial ablation', 'targeted ablation', 'image guided therapy' and 'prostate cancer'. Expert commentary: Advanced radiological tools such as multiparametric magnetic resonance imaging (mpMRI), multiparametric ultrasound (mpUS), prostate-specific-membrane-antigen positron emission tomography (PSMA-PET) represent a revolution in the ability to understand cancer function and biology. Advances in ablative technologies now provide a menu of modalities that can be rationalized based on lesion location, size and perhaps in the near future, pre-determined resistance to therapy. However, these need to be carefully studied to establish their safety and efficacy parameters. Adjuvant strategies to enhance focal ablation are under development.
Search for Gravitational Waves Associated with γ-ray Bursts Detected by the Interplanetary Network
NASA Astrophysics Data System (ADS)
Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Alemic, A.; Allen, B.; Allocca, A.; Amariutei, D.; Andersen, M.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J. S.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Augustus, H.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barbet, M.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bauchrowitz, J.; Bauer, Th. S.; Baune, C.; Bavigadda, V.; Behnke, B.; Bejger, M.; Beker, M. G.; Belczynski, C.; Bell, A. S.; Bell, C.; Bergmann, G.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biscans, S.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bosi, L.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Buchman, S.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burman, R.; Buskulic, D.; Buy, C.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castaldi, G.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. 
S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C.; Colombini, M.; Cominsky, L.; Constancio, M.; Conte, A.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Croce, R. P.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, C.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Dickson, J.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Dominguez, E.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S.; Eberle, T.; Edo, T.; Edwards, M.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S.; Garufi, F.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. 
D.; Giazotto, A.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Gräf, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C. J.; Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Ha, J.; Hall, E. D.; Hamilton, W.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hart, M.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Holt, K.; Hopkins, P.; Horrom, T.; Hoske, D.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Huerta, E.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Idrisy, A.; Ingram, D. R.; Inta, R.; Islas, G.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; Jang, H.; Jaranowski, P.; Ji, Y.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karlen, J.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Keiser, G. M.; Keitel, D.; Kelley, D. B.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, K.; Kim, N. G.; Kim, N.; Kim, S.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, A.; Kumar, D. Nanda; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lam, P. K.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. 
D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, P. J.; Leonardi, M.; Leong, J. R.; Leonor, I.; Le Roux, A.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B.; Lewis, J.; Li, T. G. F.; Libbrecht, K.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lopez, E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Ma, Y.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R.; Mageswaran, M.; Maglione, C.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N. M.; Mansell, G.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Mavalvala, N.; May, G.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McLin, K.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Mikhailov, E. E.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nielsen, A. B.; Nissanke, S.; Nitz, A. 
H.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Oh, J. J.; Oh, S. H.; Ohme, F.; Omar, S.; Oppermann, P.; Oram, R.; O'Reilly, B.; Ortega, W.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palashov, O.; Palomba, C.; Pan, H.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poteomkin, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qin, J.; Quetschke, V.; Quintero, E.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Ramirez, K.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Recchia, S.; Reed, C. M.; Regimbau, T.; Reid, S.; Reitze, D. H.; Reula, O.; Rhoades, E.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Roddy, S. B.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J. R.; Sankar, S.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Scheuer, J.; Schilling, R.; Schilman, M.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. 
L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Son, E. J.; Sorazu, B.; Souradeep, T.; Staley, A.; Stebbins, J.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Stephens, B. C.; Steplewski, S.; Stevenson, S.; Stone, R.; Stops, D.; Strain, K. A.; Straniero, N.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tao, J.; Tarabrin, S. P.; Taylor, R.; Tellez, G.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Tshilumba, D.; Tuennermann, H.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyachanin, S. P.; Wade, A. R.; Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, M.; Wang, X.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Williams, K.; Williams, L.; Williams, R.; Williams, T. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wolovick, N.; Worden, J.; Wu, Y.; Yablon, J.; Yakushin, I.; Yam, W.; Yamamoto, H.; Yancey, C. 
C.; Yang, H.; Yoshida, S.; Yvert, M.; ZadroŻny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, Fan; Zhang, L.; Zhao, C.; Zhu, H.; Zhu, X. J.; Zucker, M. E.; Zuraw, S.; Zweizig, J.; Aptekar, R. L.; Atteia, J. L.; Cline, T.; Connaughton, V.; Frederiks, D. D.; Golenetskii, S. V.; Hurley, K.; Krimm, H. A.; Marisaldi, M.; Pal'shin, V. D.; Palmer, D.; Svinkin, D. S.; Terada, Y.; von Kienlin, A.; LIGO Scientific Collaboration; Virgo Collaboration; IPN Collaboration
2014-07-01
We present the results of a search for gravitational waves associated with 223 γ-ray bursts (GRBs) detected by the InterPlanetary Network (IPN) in 2005-2010 during LIGO's fifth and sixth science runs and Virgo's first, second, and third science runs. The IPN satellites provide accurate times of the bursts and sky localizations that vary significantly from degree scale to hundreds of square degrees. We search for both a well-modeled binary coalescence signal, the favored progenitor model for short GRBs, and for generic, unmodeled gravitational wave bursts. Both searches use the event time and sky localization to improve the gravitational wave search sensitivity as compared to corresponding all-time, all-sky searches. We find no evidence of a gravitational wave signal associated with any of the IPN GRBs in the sample, nor do we find evidence for a population of weak gravitational wave signals associated with the GRBs. For all IPN-detected GRBs for which a sufficient duration of quality gravitational wave data are available, we place lower bounds on the distance to the source in accordance with an optimistic assumption of gravitational wave emission energy of 10^-2 M⊙c^2 at 150 Hz, and find a median of 13 Mpc. For the 27 short-hard GRBs we place 90% confidence exclusion distances to two source models: a binary neutron star coalescence, with a median distance of 12 Mpc, or the coalescence of a neutron star and black hole, with a median distance of 22 Mpc. Finally, we combine this search with previously published results to provide a population statement for GRB searches in first-generation LIGO and Virgo gravitational wave detectors and a resulting examination of prospects for the advanced gravitational wave detectors.
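Exclusion distances of this kind typically follow from relating an assumed isotropic emission energy to the root-sum-squared strain h_rss the detectors could have observed, via E_GW ≈ (π²c³/G) D² f₀² h_rss². As a rough illustration only, not a calculation from the paper, the sketch below evaluates that relation for the abstract's assumption of 10^-2 M⊙c^2 at 150 Hz; the h_rss sensitivity used is an assumed, typical-order value, not one quoted in the paper.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec in metres

def exclusion_distance(e_gw_joules, f0_hz, h_rss):
    """Distance at which isotropic GW emission of energy e_gw at
    frequency f0 yields root-sum-squared strain h_rss, inverting
    E_GW ~ (pi^2 c^3 / G) D^2 f0^2 h_rss^2."""
    d_squared = G * e_gw_joules / (math.pi**2 * c**3 * f0_hz**2 * h_rss**2)
    return math.sqrt(d_squared)

# Optimistic emission assumption from the abstract: 10^-2 M_sun c^2 at 150 Hz.
e_gw = 1e-2 * M_SUN * c**2
# h_rss = 6e-22 Hz^-1/2 is an assumed illustrative sensitivity.
d = exclusion_distance(e_gw, 150.0, 6e-22)
print(f"exclusion distance ~ {d / MPC:.1f} Mpc")
```

With these assumed numbers the bound comes out at several Mpc, the same order as the 13 Mpc median quoted above; note the distance scales only as the square root of the assumed emission energy.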
NASA Technical Reports Server (NTRS)
Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Acernese, F.; Blackbum, L.; Camp, J. B.; Gehrels, N.; Graff, P. B.;
2014-01-01
We present the results of a search for gravitational waves associated with 223 gamma ray bursts (GRBs) detected by the InterPlanetary Network (IPN) in 2005-2010 during LIGO's fifth and sixth science runs and Virgo's first, second, and third science runs. The IPN satellites provide accurate times of the bursts and sky localizations that vary significantly from degree scale to hundreds of square degrees. We search for both a well-modeled binary coalescence signal, the favored progenitor model for short GRBs, and for generic, unmodeled gravitational wave bursts. Both searches use the event time and sky localization to improve the gravitational wave search sensitivity as compared to corresponding all-time, all-sky searches. We find no evidence of a gravitational wave signal associated with any of the IPN GRBs in the sample, nor do we find evidence for a population of weak gravitational wave signals associated with the GRBs. For all IPN-detected GRBs for which a sufficient duration of quality gravitational wave data are available, we place lower bounds on the distance to the source in accordance with an optimistic assumption of gravitational wave emission energy of 10^-2 solar mass c^2 at 150 Hz, and find a median of 13 Mpc. For the 27 short-hard GRBs we place 90% confidence exclusion distances to two source models: a binary neutron star coalescence, with a median distance of 12 Mpc, or the coalescence of a neutron star and black hole, with a median distance of 22 Mpc. Finally, we combine this search with previously published results to provide a population statement for GRB searches in first-generation LIGO and Virgo gravitational wave detectors and a resulting examination of prospects for the advanced gravitational wave detectors.
Search for gravitational waves associated with γ-ray bursts detected by the interplanetary network.
Aasi, J; Abbott, B P; Abbott, R; Abbott, T; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Affeldt, C; Agathos, M; Aggarwal, N; Aguiar, O D; Ajith, P; Alemic, A; Allen, B; Allocca, A; Amariutei, D; Andersen, M; Anderson, R A; Anderson, S B; Anderson, W G; Arai, K; Araya, M C; Arceneaux, C; Areeda, J S; Ast, S; Aston, S M; Astone, P; Aufmuth, P; Augustus, H; Aulbert, C; Aylott, B E; Babak, S; Baker, P T; Ballardin, G; Ballmer, S W; Barayoga, J C; Barbet, M; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barton, M A; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Bauchrowitz, J; Bauer, Th S; Baune, C; Bavigadda, V; Behnke, B; Bejger, M; Beker, M G; Belczynski, C; Bell, A S; Bell, C; Bergmann, G; Bersanetti, D; Bertolini, A; Betzwieser, J; Bilenko, I A; Billingsley, G; Birch, J; Biscans, S; Bitossi, M; Biwer, C; Bizouard, M A; Black, E; Blackburn, J K; Blackburn, L; Blair, D; Bloemen, S; Bock, O; Bodiya, T P; Boer, M; Bogaert, G; Bogan, C; Bond, C; Bondu, F; Bonelli, L; Bonnand, R; Bork, R; Born, M; Boschi, V; Bose, Sukanta; Bosi, L; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Bridges, D O; Brillet, A; Brinkmann, M; Brisson, V; Brooks, A F; Brown, D A; Brown, D D; Brückner, F; Buchman, S; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Burman, R; Buskulic, D; Buy, C; Cadonati, L; Cagnoli, G; Calderón Bustillo, J; Calloni, E; Camp, J B; Campsie, P; Cannon, K C; Canuel, B; Cao, J; Capano, C D; Carbognani, F; Carbone, L; Caride, S; Castaldi, G; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Celerier, C; Cella, G; Cepeda, C; Cesarini, E; Chakraborty, R; Chalermsongsak, T; Chamberlin, S J; Chao, S; Charlton, P; Chassande-Mottin, E; Chen, X; Chen, Y; Chincarini, A; Chiummo, A; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, S S Y; Chung, S; Ciani, G; Clara, F; Clark, D E; Clark, J A; Clayton, J H; Cleva, F; Coccia, E; Cohadon, P-F; Colla, A; Collette, C; 
Colombini, M; Cominsky, L; Constancio, M; Conte, A; Cook, D; Corbitt, T R; Cornish, N; Corsi, A; Costa, C A; Coughlin, M W; Coulon, J-P; Countryman, S; Couvares, P; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Craig, K; Creighton, J D E; Croce, R P; Crowder, S G; Cumming, A; Cunningham, L; Cuoco, E; Cutler, C; Dahl, K; Dal Canton, T; Damjanic, M; Danilishin, S L; D'Antonio, S; Danzmann, K; Dattilo, V; Daveloza, H; Davier, M; Davies, G S; Daw, E J; Day, R; Dayanga, T; DeBra, D; Debreczeni, G; Degallaix, J; Deléglise, S; Del Pozzo, W; Denker, T; Dent, T; Dereli, H; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Dhurandhar, S; Díaz, M; Dickson, J; Di Fiore, L; Di Lieto, A; Di Palma, I; Di Virgilio, A; Dolique, V; Dominguez, E; Donovan, F; Dooley, K L; Doravari, S; Douglas, R; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S; Eberle, T; Edo, T; Edwards, M; Effler, A; Eggenstein, H-B; Ehrens, P; Eichholz, J; Eikenberry, S S; Endrőczi, G; Essick, R; Etzel, T; Evans, M; Evans, T; Factourovich, M; Fafone, V; Fairhurst, S; Fan, X; Fang, Q; Farinon, S; Farr, B; Farr, W M; Favata, M; Fazi, D; Fehrmann, H; Fejer, M M; Feldbaum, D; Feroz, F; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Finn, L S; Fiori, I; Fisher, R P; Flaminio, R; Fournier, J-D; Franco, S; Frasca, S; Frasconi, F; Frede, M; Frei, Z; Freise, A; Frey, R; Fricke, T T; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gair, J R; Gammaitoni, L; Gaonkar, S; Garufi, F; Gehrels, N; Gemme, G; Gendre, B; Genin, E; Gennai, A; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gleason, J; Goetz, E; Goetz, R; Gondan, L; González, G; Gordon, N; Gorodetsky, M L; Gossan, S; Goßler, S; Gouaty, R; Gräf, C; Graff, P B; Granata, M; Grant, A; Gras, S; Gray, C; Greenhalgh, R J S; Gretarsson, A M; Groot, P; Grote, H; Grover, K; Grunewald, S; Guidi, G M; Guido, C J; Gushwa, K; Gustafson, E K; Gustafson, R; Ha, J; Hall, E D; Hamilton, W; Hammer, D; Hammond, G; Hanke, M; Hanks, J; Hanna, C; 
Hannam, M D; Hanson, J; Harms, J; Harry, G M; Harry, I W; Harstad, E D; Hart, M; Hartman, M T; Haster, C-J; Haughian, K; Heidmann, A; Heintze, M; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Heptonstall, A W; Heurs, M; Hewitson, M; Hild, S; Hoak, D; Hodge, K A; Hofman, D; Holt, K; Hopkins, P; Horrom, T; Hoske, D; Hosken, D J; Hough, J; Howell, E J; Hu, Y; Huerta, E; Hughey, B; Husa, S; Huttner, S H; Huynh, M; Huynh-Dinh, T; Idrisy, A; Ingram, D R; Inta, R; Islas, G; Isogai, T; Ivanov, A; Iyer, B R; Izumi, K; Jacobson, M; Jang, H; Jaranowski, P; Ji, Y; Jiménez-Forteza, F; Johnson, W W; Jones, D I; Jones, R; Jonker, R J G; Ju, L; Haris, K; Kalmus, P; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karlen, J; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, H; Kaufer, S; Kaur, T; Kawabe, K; Kawazoe, F; Kéfélian, F; Keiser, G M; Keitel, D; Kelley, D B; Kells, W; Keppel, D G; Khalaidovski, A; Khalili, F Y; Khazanov, E A; Kim, C; Kim, K; Kim, N G; Kim, N; Kim, S; Kim, Y-M; King, E J; King, P J; Kinzel, D L; Kissel, J S; Klimenko, S; Kline, J; Koehlenbeck, S; Kokeyama, K; Kondrashov, V; Koranda, S; Korth, W Z; Kowalska, I; Kozak, D B; Kringel, V; Krishnan, B; Królak, A; Kuehn, G; Kumar, A; Kumar, D Nanda; Kumar, P; Kumar, R; Kuo, L; Kutynia, A; Lam, P K; Landry, M; Lantz, B; Larson, S; Lasky, P D; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C H; Lee, H K; Lee, H M; Lee, J; Lee, P J; Leonardi, M; Leong, J R; Leonor, I; Le Roux, A; Leroy, N; Letendre, N; Levin, Y; Levine, B; Lewis, J; Li, T G F; Libbrecht, K; Libson, A; Lin, A C; Littenberg, T B; Lockerbie, N A; Lockett, V; Lodhia, D; Loew, K; Logue, J; Lombardi, A L; Lopez, E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J; Lubinski, M J; Lück, H; Lundgren, A P; Ma, Y; Macdonald, E P; MacDonald, T; Machenschalk, B; MacInnis, M; Macleod, D M; Magaña-Sandoval, F; Magee, R; Mageswaran, M; Maglione, C; Mailand, K; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Manca, G M; 
Mandel, I; Mandic, V; Mangano, V; Mangini, N M; Mansell, G; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A; Maros, E; Marque, J; Martelli, F; Martin, I W; Martin, R M; Martinelli, L; Martynov, D; Marx, J N; Mason, K; Masserot, A; Massinger, T J; Matichard, F; Matone, L; Mavalvala, N; May, G; Mazumder, N; Mazzolo, G; McCarthy, R; McClelland, D E; McGuire, S C; McIntyre, G; McIver, J; McLin, K; Meacher, D; Meadors, G D; Mehmet, M; Meidam, J; Meinders, M; Melatos, A; Mendell, G; Mercer, R A; Meshkov, S; Messenger, C; Meyer, M S; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Mikhailov, E E; Milano, L; Miller, J; Minenkov, Y; Mingarelli, C M F; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moe, B; Moggi, A; Mohan, M; Mohapatra, S R P; Moraru, D; Moreno, G; Morgado, N; Morriss, S R; Mossavi, K; Mours, B; Mow-Lowry, C M; Mueller, C L; Mueller, G; Mukherjee, S; Mullavey, A; Munch, J; Murphy, D; Murray, P G; Mytidis, A; Nagy, M F; Nardecchia, I; Naticchioni, L; Nayak, R K; Necula, V; Nelemans, G; Neri, I; Neri, M; Newton, G; Nguyen, T; Nielsen, A B; Nissanke, S; Nitz, A H; Nocera, F; Nolting, D; Normandin, M E N; Nuttall, L K; Ochsner, E; O'Dell, J; Oelker, E; Oh, J J; Oh, S H; Ohme, F; Omar, S; Oppermann, P; Oram, R; O'Reilly, B; Ortega, W; O'Shaughnessy, R; Osthelder, C; Ottaway, D J; Ottens, R S; Overmier, H; Owen, B J; Padilla, C; Pai, A; Palashov, O; Palomba, C; Pan, H; Pan, Y; Pankow, C; Paoletti, F; Papa, M A; Paris, H; Pasqualetti, A; Passaquieti, R; Passuello, D; Pedraza, M; Pele, A; Penn, S; Perreca, A; Phelps, M; Pichot, M; Pickenpack, M; Piergiovanni, F; Pierro, V; Pinard, L; Pinto, I M; Pitkin, M; Poeld, J; Poggiani, R; Poteomkin, A; Powell, J; Prasad, J; Predoi, V; Premachandra, S; Prestegard, T; Price, L R; Prijatelj, M; Privitera, S; Prodi, G A; Prokhorov, L; Puncken, O; Punturo, M; Puppo, P; Pürrer, M; Qin, J; Quetschke, V; Quintero, E; Quitzow-James, R; Raab, F J; Rabeling, D S; Rácz, I; Radkins, H; Raffai, 
P; Raja, S; Rajalakshmi, G; Rakhmanov, M; Ramet, C; Ramirez, K; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Recchia, S; Reed, C M; Regimbau, T; Reid, S; Reitze, D H; Reula, O; Rhoades, E; Ricci, F; Riesen, R; Riles, K; Robertson, N A; Robinet, F; Rocchi, A; Roddy, S B; Rolland, L; Rollins, J G; Romano, R; Romanov, G; Romie, J H; Rosińska, D; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Salemi, F; Sammut, L; Sandberg, V; Sanders, J R; Sankar, S; Sannibale, V; Santiago-Prieto, I; Saracco, E; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Savage, R; Scheuer, J; Schilling, R; Schilman, M; Schmidt, P; Schnabel, R; Schofield, R M S; Schreiber, E; Schuette, D; Schutz, B F; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Shaddock, D A; Shah, S; Shahriar, M S; Shaltev, M; Shao, Z; Shapiro, B; Shawhan, P; Shoemaker, D H; Sidery, T L; Siellez, K; Siemens, X; Sigg, D; Simakov, D; Singer, A; Singer, L; Singh, R; Sintes, A M; Slagmolen, B J J; Slutsky, J; Smith, J R; Smith, M R; Smith, R J E; Smith-Lefebvre, N D; Son, E J; Sorazu, B; Souradeep, T; Staley, A; Stebbins, J; Steinke, M; Steinlechner, J; Steinlechner, S; Stephens, B C; Steplewski, S; Stevenson, S; Stone, R; Stops, D; Strain, K A; Straniero, N; Strigin, S; Sturani, R; Stuver, A L; Summerscales, T Z; Susmithan, S; Sutton, P J; Swinkels, B; Tacca, M; Talukder, D; Tanner, D B; Tao, J; Tarabrin, S P; Taylor, R; Tellez, G; Thirugnanasambandam, M P; Thomas, M; Thomas, P; Thorne, K A; Thorne, K S; Thrane, E; Tiwari, V; Tokmakov, K V; Tomlinson, C; Tonelli, M; Torres, C V; Torrie, C I; Travasso, F; Traylor, G; Tse, M; Tshilumba, D; Tuennermann, H; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; Vallisneri, M; van Beuzekom, M; van den Brand, J F J; Van Den Broeck, C; van der Sluys, M V; van Heijningen, J; van Veggel, A A; Vass, S; Vasúth, M; Vaulin, R; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Verkindt, D; Vetrano, F; Viceré, 
A; Vincent-Finley, R; Vinet, J-Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Vousden, W D; Vyachanin, S P; Wade, A R; Wade, L; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, M; Wang, X; Ward, R L; Was, M; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Welborn, T; Wen, L; Wessels, P; West, M; Westphal, T; Wette, K; Whelan, J T; White, D J; Whiting, B F; Wiesner, K; Wilkinson, C; Williams, K; Williams, L; Williams, R; Williams, T D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M; Winkler, W; Wipf, C C; Wiseman, A G; Wittel, H; Woan, G; Wolovick, N; Worden, J; Wu, Y; Yablon, J; Yakushin, I; Yam, W; Yamamoto, H; Yancey, C C; Yang, H; Yoshida, S; Yvert, M; Zadrożny, A; Zanolin, M; Zendri, J-P; Zhang, Fan; Zhang, L; Zhao, C; Zhu, H; Zhu, X J; Zucker, M E; Zuraw, S; Zweizig, J; Aptekar, R L; Atteia, J L; Cline, T; Connaughton, V; Frederiks, D D; Golenetskii, S V; Hurley, K; Krimm, H A; Marisaldi, M; Pal'shin, V D; Palmer, D; Svinkin, D S; Terada, Y; von Kienlin, A
2014-07-04
We present the results of a search for gravitational waves associated with 223 γ-ray bursts (GRBs) detected by the InterPlanetary Network (IPN) in 2005-2010 during LIGO's fifth and sixth science runs and Virgo's first, second, and third science runs. The IPN satellites provide accurate times of the bursts and sky localizations that vary significantly from degree scale to hundreds of square degrees. We search for both a well-modeled binary coalescence signal, the favored progenitor model for short GRBs, and for generic, unmodeled gravitational wave bursts. Both searches use the event time and sky localization to improve the gravitational wave search sensitivity as compared to corresponding all-time, all-sky searches. We find no evidence of a gravitational wave signal associated with any of the IPN GRBs in the sample, nor do we find evidence for a population of weak gravitational wave signals associated with the GRBs. For all IPN-detected GRBs for which a sufficient duration of quality gravitational wave data are available, we place lower bounds on the distance to the source in accordance with an optimistic assumption of gravitational wave emission energy of 10^-2 M⊙c^2 at 150 Hz, and find a median of 13 Mpc. For the 27 short-hard GRBs we place 90% confidence exclusion distances to two source models: a binary neutron star coalescence, with a median distance of 12 Mpc, or the coalescence of a neutron star and black hole, with a median distance of 22 Mpc. Finally, we combine this search with previously published results to provide a population statement for GRB searches in first-generation LIGO and Virgo gravitational wave detectors and a resulting examination of prospects for the advanced gravitational wave detectors.
2015-01-01
Systematic analysis and interpretation of the large number of tandem mass spectra (MS/MS) obtained in metabolomics experiments is a bottleneck in discovery-driven research. MS/MS mass spectral libraries are small compared to all known small molecule structures and are often not freely available. MS2Analyzer was therefore developed to enable user-defined searches of thousands of spectra for mass spectral features such as neutral losses, m/z differences, and product and precursor ions from MS/MS spectra in MSP/MGF files. The software is freely available at http://fiehnlab.ucdavis.edu/projects/MS2Analyzer/. As the reference query set, 147 literature-reported neutral losses and their corresponding substructures were collected. This set was tested for accuracy of linking neutral loss analysis to substructure annotations using 19 329 accurate mass tandem mass spectra of structurally known compounds from the NIST11 MS/MS library. Validation studies showed that 13 typical neutral losses such as acetylations, cysteine conjugates, or glycosylations correctly annotated the associated substructures in 92.1 ± 6.4% of cases, while the absence of mass spectral features does not necessarily imply the absence of such substructures. Use of this tool has been successfully demonstrated for complex lipids in microalgae. PMID:25263576
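The core operation described above, searching spectra for a user-defined neutral loss, can be sketched in a few lines. The snippet below is an illustration of the idea only, not MS2Analyzer's actual implementation: the spectra are invented, and a real tool would parse MSP/MGF files and support ppm tolerances.

```python
# Toy spectra: (precursor_mz, [(fragment_mz, intensity), ...]); all values invented.
SPECTRA = {
    "cmpd_A": (348.10, [(186.05, 100.0), (129.10, 20.0)]),
    "cmpd_B": (250.12, [(232.11, 55.0), (120.08, 80.0)]),
}

HEXOSE_LOSS = 162.0528  # neutral loss of a hexose unit (e.g. glycosylation)
WATER_LOSS = 18.0106    # neutral loss of water

def find_neutral_loss(spectra, loss, tol=0.01):
    """Return names of spectra with a fragment at precursor_mz - loss (+/- tol)."""
    hits = []
    for name, (precursor, peaks) in spectra.items():
        target = precursor - loss
        if any(abs(mz - target) <= tol for mz, _ in peaks):
            hits.append(name)
    return hits

print(find_neutral_loss(SPECTRA, HEXOSE_LOSS))  # ['cmpd_A']
print(find_neutral_loss(SPECTRA, WATER_LOSS))   # ['cmpd_B']
```

The same scan generalizes directly to m/z differences between fragment pairs or to fixed product-ion searches, which is the feature set the abstract describes.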
SimPhospho: a software tool enabling confident phosphosite assignment.
Suni, Veronika; Suomi, Tomi; Tsubosaka, Tomoya; Imanishi, Susumu Y; Elo, Laura L; Corthals, Garry L
2018-03-27
Mass spectrometry combined with enrichment strategies for phosphorylated peptides has been successfully employed for two decades to identify sites of phosphorylation. However, unambiguous phosphosite assignment is considered challenging. Given that site-specific phosphorylation events function as different molecular switches, validation of phosphorylation sites is of utmost importance. In our earlier study we developed a method based on simulated phosphopeptide spectral libraries, which enables highly sensitive and accurate phosphosite assignments. To promote more widespread use of this method, we here introduce a software implementation with improved usability and performance. We present SimPhospho, a fast and user-friendly tool for accurate simulation of phosphopeptide tandem mass spectra. Simulated phosphopeptide spectral libraries are used to validate and supplement database search results, with the goal of improving reliable phosphoproteome identification and reporting. The presented program can be easily used together with the Trans-Proteomic Pipeline and integrated in a phosphoproteomics data analysis workflow. SimPhospho is available for Windows, Linux and Mac operating systems at https://sourceforge.net/projects/simphospho/. It is open source and implemented in C++. A user's manual with a detailed description of data analysis using SimPhospho as well as test data can be found as supplementary material of this article. Supplementary data are available at https://www.btk.fi/research/computational-biomedicine/software/.
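To illustrate what simulating a phosphopeptide tandem mass spectrum involves at its simplest, the sketch below computes singly charged b- and y-ion m/z values for a short peptide with a phosphate group (+79.96633 Da, HPO3) placed on a chosen residue. This is a minimal illustration of the principle, not SimPhospho's algorithm, which models phosphopeptide fragmentation in far more detail; the peptide and site are invented.

```python
# Monoisotopic residue masses (Da) for a few amino acids
RESIDUE = {"A": 71.03711, "S": 87.03203, "P": 97.05276,
           "T": 101.04768, "L": 113.08406, "G": 57.02146}
PHOSPHO = 79.96633  # mass of HPO3 added by phosphorylation
PROTON = 1.00728
WATER = 18.01056

def by_ions(peptide, phosphosite):
    """Singly charged b- and y-ion m/z values for `peptide` with a
    phosphate on the residue at zero-based position `phosphosite`."""
    masses = [RESIDUE[aa] for aa in peptide]
    masses[phosphosite] += PHOSPHO
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b, y

# Phosphorylation on the serine of "ASL": every b ion from b2 onward, and
# every y ion from y2 onward, carries the +79.966 Da shift that pins the site.
b, y = by_ions("ASL", 1)
print(b, y)
```

Matching an observed spectrum against such simulated ladders for each candidate site is what allows the site-determining ions, those whose masses differ between candidate phosphosites, to be identified.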
Kramer, Henk; van Putten, John W G; Douma, W Rob; Smidt, Alie A; van Dullemen, Hendrik M; Groen, Harry J M
2005-02-01
Endoscopic ultrasonography (EUS) is a novel method for staging of the mediastinum in lung cancer patients. The recent development of linear scanners enables safe and accurate fine-needle aspiration (FNA) of mediastinal and upper abdominal structures under real-time ultrasound guidance. However, various methods and equipment for mediastinal EUS-FNA are being used throughout the world, and a detailed description of the procedures is lacking. A thorough description of linear EUS-FNA is needed. A step-by-step description of the linear EUS-FNA procedure as performed in our hospital will be provided. Ultrasonographic landmarks will be shown on images. The procedure will be related to published literature, with a systematic literature search. EUS-FNA is an outpatient procedure under conscious sedation. The typical linear EUS-FNA procedure starts with examination of the retroperitoneal area. After this, systematic scanning of the mediastinum is performed at intervals of 1-2 cm. Abnormalities are noted, and FNA of the abnormalities can be performed. Specimens are assessed for cellularity on-site. The entire procedure takes 45-60 min. EUS-FNA is minimally invasive, accurate, and fast. Anatomical areas can be reached that are inaccessible for cervical mediastinoscopy. EUS-FNA is useful for the staging of lung cancer or the assessment and diagnosis of abnormalities in the posterior mediastinum.
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling of a transistor are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used in order to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is built from multi-bias s-parameter measurements, using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameters modelling are presented, and three device-modelling recommendations are discussed.
Can We Predict Patient Wait Time?
Pianykh, Oleg S; Rosenthal, Daniel I
2015-10-01
The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be equally accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
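The line-size idea above lends itself to a one-variable regression sketch: predict wait time from the number of patients ahead in the queue. The data and fitted coefficients below are hypothetical, not the paper's models.

```python
# Minimal "line-size" wait-time model: ordinary least squares of wait time on
# queue length. The data points are made up for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx        # (slope, intercept)

queue_len = [0, 1, 2, 3, 4, 5]           # patients ahead in line
wait_min  = [2, 7, 11, 17, 21, 27]       # observed waits (minutes)
slope, intercept = fit_line(queue_len, wait_min)
predicted = intercept + slope * 3        # expected wait with 3 patients ahead
```

Such a model is cheap enough to refresh continuously and display on a front-desk monitor, which matches the deployment the abstract describes.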
Development of Health Information Search Engine Based on Metadata and Ontology
Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae
2014-01-01
Objectives: The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods: Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results: A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions: A health information search engine based on metadata and ontology will provide reliable health information to both information producers and information consumers. PMID:24872907
Pols, David H.J.; Bramer, Wichor M.; Bindels, Patrick J.E.; van de Laar, Floris A.; Bohnen, Arthur M.
2015-01-01
Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. PMID:26195683
Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.
2015-01-01
Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data – both within the same organization and among different ones – remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomedical simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES), and within smaller groups of researchers at laboratory scale (iBIOMES Lite), that take advantage of the standardization of the meta data used to describe biomolecular simulations. PMID:26387907
Thompson, Andrew E; Graydon, Sara L
2009-01-01
With continuing use of the Internet, rheumatologists are referring patients to various websites to gain information about medications and diseases. Our goal was to develop and evaluate a Medication Website Assessment Tool (MWAT) for use by health professionals, and to explore the overall quality of methotrexate information presented on common English-language websites. Identification of websites was performed using a search strategy on the search engine Google. The first 250 hits were screened. Inclusion criteria included those English-language websites from authoritative sources, trusted medical, physicians', and common health-related websites. Websites from pharmaceutical companies, online pharmacies, and where the purpose seemed to be primarily advertisements were also included. Product monographs or technical-based web pages and web pages where the information was clearly directed at patients with cancer were excluded. Two reviewers independently scored each included web page for completeness and accuracy, format, readability, reliability, and credibility. An overall ranking was provided for each methotrexate information page. Twenty-eight web pages were included in the analysis. The average score for completeness and accuracy was 15.48+/-3.70 (maximum 24) with 10 out of 28 pages scoring 18 (75%) or higher. The average format score was 6.00+/-1.46 (maximum 8). The Flesch-Kincaid Grade Level revealed an average grade level of 10.07+/-1.84, with 5 out of 28 websites written at a reading level less than grade 8; however, no web page scored at a grade 5 to 6 level. An overall ranking was calculated identifying 8 web pages as appropriate sources of accurate and reliable methotrexate information. With the enormous amount of information available on the Internet, it is important to direct patients to web pages that are complete, accurate, readable, and credible sources of information. We identified web pages that may serve the interests of both rheumatologists and patients.
Efficient protein structure search using indexing methods
Kim, Sungchul; Sael, Lee; Yu, Hwanjo
2013-01-01
Understanding functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests of structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns the data points whose distance to the query point is less than θ. The results show that both iDistance and iKernel significantly enhance the searching speed. In top-k nearest neighbor search, the searching time is reduced 69.6%, 77%, 77.4% and 87.9%, respectively using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the searching time is reduced 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively. PMID:23691543
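The filter-and-refine strategy in this abstract (rank on a few leading 3DZD attributes, then re-rank the top 10 × k candidates with the full descriptor) can be sketched as follows. This is a plain-Python simplification, not the actual iDistance/iKernel index structures, and the toy vectors stand in for real 3DZDs.

```python
# Two-stage top-k search: a cheap distance on a reduced (prefix) vector
# selects 10*k candidates, then the exact full-vector distance re-ranks them.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def top_k(db, query, k, prefix=2):
    # Stage 1: filter on the first `prefix` attributes only.
    cand = sorted(db, key=lambda v: dist(v[:prefix], query[:prefix]))[:10 * k]
    # Stage 2: refine with the full descriptor over the candidates alone.
    return sorted(cand, key=lambda v: dist(v, query))[:k]

db = [(0.1, 0.2, 0.9), (0.1, 0.2, 0.1), (0.9, 0.9, 0.9), (0.2, 0.1, 0.0)]
query = (0.1, 0.2, 0.0)
nearest = top_k(db, query, k=1)
```

The 10 × k over-retrieval hedges against the prefix distance mis-ranking a true neighbor, which is the trade-off the paper's extended indexes exploit.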
Kim, Yoonsang; Huang, Jidong; Emery, Sherry
2016-02-26
Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss two conditions that estimation of retrieval precision and recall rely on (accurate human coding and full data collection) and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework on a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose.
We developed and applied a search filter to retrieve e-cigarette-related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette-related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms.
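The two reporting statistics at the core of the framework can be computed directly from human-coded counts; the numbers below are illustrative, not the study's tobacco data.

```python
# Retrieval precision: fraction of retrieved items that are relevant.
# Retrieval recall: fraction of all relevant items that were retrieved.

def precision_recall(retrieved_relevant, retrieved_total, relevant_total):
    precision = retrieved_relevant / retrieved_total
    recall = retrieved_relevant / relevant_total
    return precision, recall

# Hypothetical coding outcome: 950 of 1,000 retrieved tweets coded relevant,
# out of an estimated 1,100 relevant tweets in the full archive.
p, r = precision_recall(950, 1000, 1100)
```

Under the paper's non-ideal conditions, the inputs would be corrected for coder error rates and for messages that could not be archived before these ratios are reported.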
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
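A toy version of the diamond search step described above might look like this: probe a large diamond pattern until its center wins, then refine with a small diamond. Frame contents and block coordinates are made up, and bounds checking is omitted for brevity.

```python
# Toy diamond-search block matching between two frames (pure Python,
# hypothetical data; real use would add bounds checks and run this per block).

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]        # large diamond pattern
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]  # small diamond pattern

def sad(cur, ref, y, x, dy, dx, b):
    """Sum of absolute differences for a b-by-b block shifted by (dy, dx)."""
    return sum(abs(cur[y + i][x + j] - ref[y + dy + i][x + dx + j])
               for i in range(b) for j in range(b))

def diamond_search(cur, ref, y, x, b):
    cy = cx = 0                      # current motion-vector estimate
    while True:                      # large-diamond steps until the center wins
        best = min(LDSP, key=lambda d: sad(cur, ref, y, x, cy + d[0], cx + d[1], b))
        if best == (0, 0):
            break
        cy, cx = cy + best[0], cx + best[1]
    best = min(SDSP, key=lambda d: sad(cur, ref, y, x, cy + d[0], cx + d[1], b))
    return cy + best[0], cx + best[1]

# Reference frame with a gradient pattern; current frame shifted by (1, 2).
ref = [[12 * i + j for j in range(12)] for i in range(12)]
cur = [[12 * (i + 1) + (j + 2) for j in range(12)] for i in range(12)]
motion = diamond_search(cur, ref, 4, 4, 2)   # estimate motion of a 2x2 block
```

In the correction algorithm, the matched block pairs found this way would supply the transform pairs whose error drives the gradient-descent parameter update.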
Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.
Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding
2016-01-01
The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied in many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of their candidate values until the iterations converge on the optimal effect. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than those of the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
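A bare-bones harmony search (without the paper's rough-set enhancement) can be sketched as below, minimizing a toy sphere function. The memory size, HMCR, PAR, and bandwidth values are generic textbook choices, not the authors' settings.

```python
import random

# Basic harmony search: build a new harmony component-by-component from the
# harmony memory (with occasional pitch adjustment or random re-initialization)
# and replace the worst stored harmony whenever the new one is better.

def harmony_search(f, dim=2, hm_size=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=500, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:          # consider a value from memory
                v = rng.choice(hm)[d]
                if rng.random() < par:       # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                            # random re-initialization
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(hm, key=f)
        if f(new) < f(worst):                # keep memory monotonically better
            hm[hm.index(worst)] = new
    return min(hm, key=f)

best = harmony_search(lambda x: x[0] ** 2 + x[1] ** 2)
```

In the paper's pipeline, the converged optimum would seed the fuzzy clustering used for MRI segmentation instead of a random initial value.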
Forecasting new product diffusion using both patent citation and web search traffic
Lee, Won Sang; Choi, Hyo Shin
2018-01-01
Accurate demand forecasting for new technology products is a key factor in the success of a business. We propose a way to forecast a new product's diffusion through technology diffusion and interest diffusion. Technology diffusion and interest diffusion are measured by the volume of patent citations and web search traffic, respectively. We apply the proposed method to forecast the sales of hybrid cars and industrial robots in the US market. The results show that technology diffusion, as represented by patent citations, can explain long-term sales for hybrid cars and industrial robots. On the other hand, interest diffusion, as represented by web search traffic, can help to improve the predictability of market sales of hybrid cars in the short term. However, interest diffusion has difficulty explaining the sales of industrial robots, owing to the different market characteristics. These findings indicate that our proposed model can explain the diffusion of consumer goods relatively well. PMID:29630616
A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.
Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo
2016-01-01
This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology.
Zhao, Wen; Ma, Hong; Zhang, Hua; Jin, Jiang; Dai, Gang; Hu, Lin
2017-01-01
The cognitive radio wireless sensor network (CR-WSN) is experiencing more and more attention for its capacity to automatically extract broadband instantaneous radio environment information. Obtaining sufficient linearity and spurious-free dynamic range (SFDR) is a significant premise of guaranteeing sensing performance which, however, usually suffers from the nonlinear distortion coming from the broadband radio frequency (RF) front-end in the sensor node. Moreover, unlike other existing methods, the joint effect of non-constant group delay distortion and nonlinear distortion is discussed, and its corresponding solution is provided in this paper. After that, the nonlinearity mitigation architecture based on best delay searching is proposed. Finally, verification experiments, both on simulation signals and signals from real-world measurement, are conducted and discussed. The achieved results demonstrate that with best delay searching, nonlinear distortion can be alleviated significantly and, in this way, spectrum sensing performance is more reliable and accurate. PMID:28956860
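The "best delay searching" idea can be illustrated with a minimal integer-lag search that maximizes correlation between a reference and a delayed observation. The signals and the alignment criterion here are simplifications of the paper's mitigation chain, which must additionally handle non-constant group delay and nonlinear terms.

```python
# Pick the integer lag that best aligns an observation with a reference,
# by maximizing their cross-correlation over a window of candidate delays.

def best_delay(ref, obs, max_lag):
    def corr(lag):
        pairs = [(ref[n], obs[n + lag]) for n in range(len(ref))
                 if 0 <= n + lag < len(obs)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

ref = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
obs = [0, 0, 0, 0, 1, 2, 3, 2, 1, 0]   # the reference delayed by two samples
lag = best_delay(ref, obs, max_lag=4)
```

Once the best delay is known, the distortion model can be applied to the properly aligned samples, which is the premise of the proposed mitigation architecture.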
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measures, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box), and it comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized as reference. The second algorithm is a GSS (Generating Set Search) algorithm, built in order to guarantee the conditions of global convergence and suitable for the parallel and multi-start implementation presented here. The third one is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem and the need to improve the approximation by sampling where the prediction error may be high.
The hydrological model to be calibrated was MOBIDIC, a complete balance distributed model developed at the Department of Civil and Environmental Engineering of the University of Florence. Discussion on the comparisons between the effectiveness of the different algorithms on different cases of study on Central Italy basins is provided.
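A compass search, one of the simplest members of the GSS family mentioned above, can be sketched as follows on a toy two-parameter quadratic misfit. It is derivative-free, as the abstract requires, but it is not the authors' parallel multi-start implementation, and the cost surface is purely illustrative.

```python
# Compass-style direct search: probe +/- steps along each coordinate, accept
# any improvement, and halve the step when no probe improves. No derivatives
# of the cost function are ever needed.

def compass_search(cost, x0, step=1.0, tol=1e-6, max_iter=10000):
    x, fx = list(x0), cost(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = cost(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5          # shrink the generating set's step length
            if step < tol:
                break
    return x, fx

# Toy "calibration": recover two parameters of a quadratic misfit surface.
cost = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2
params, misfit = compass_search(cost, [0.0, 0.0])
```

For an expensive hydrological simulation each `cost` call is a full model run, which is why the EGO strategy of minimizing the number of evaluations becomes attractive.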
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space that have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families and assess their viability for Advanced LIGO searches.
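The accuracy of a waveform model relative to an NR waveform is commonly quantified by a normalized overlap (match). The sketch below computes a white-noise match between two toy chirp signals; a real LIGO comparison would weight the inner product by the detector noise power spectral density and maximize over time and phase shifts, which is omitted here:

```python
import numpy as np

# Sketch: white-noise match between two waveforms, a simplified proxy
# for model accuracy in matched-filter searches.
def match(h1, h2):
    inner = lambda a, b: np.vdot(a, b).real
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

t = np.linspace(0, 1, 4096)
h_model = np.sin(2 * np.pi * 30 * t**2)       # toy chirp "model" waveform
h_nr = np.sin(2 * np.pi * 30 * t**2 + 0.01)   # slightly dephased "NR" waveform

# The mismatch (1 - match) is the usual figure of merit: small mismatch
# means the model is an adequate filter for the true signal.
mismatch = 1 - match(h_model, h_nr)
```

A constant dephasing of 0.01 rad produces a mismatch of order 10⁻⁵, illustrating why small phase errors accumulated over many orbits (e.g. the 60-orbit q = 7 runs above) are the quantity of interest.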
Reporting guidelines in health research: A review.
Simon, Arun K; Rao, Ashwini; Rajesh, Gururaghavendran; Shenoy, Ramya; Pai, Mithun B
2015-01-01
Contemporary health research has come under close scrutiny, exposing alarming flaws in the reporting of research. Reporting guidelines can aid in the identification of poorly reported studies and can bring transparency to health research. The guidelines also help journal editors, peer reviewers, funding agencies, and readers to better discern health research. Reporting guidelines encourage accurate and thorough reporting of fundamental aspects of health research so that the results of studies can be replicated by others. Reporting guidelines are potent tools for improving the practice of research and reducing reporting bias. For the present review, both electronic and manual literature searches were carried out. Electronic databases such as PubMed, MEDLINE, EBSCOhost, and ScienceDirect were searched to extract relevant articles. Various keywords and their combinations were used for the literature search, such as reporting guidelines, checklist, research, publishing standards, study design, medicine, and dentistry. The search results were scrutinized for relevance to the topic and only full-text articles in English were incorporated. Various reporting guidelines were identified and grouped under headings based on study design. This review article attempts to highlight the various reporting guidelines in the literature relating to health research, their potential applications, and their limitations.
One- and two-dimensional search of an equation of state using a newly released 2DRoptimize package
NASA Astrophysics Data System (ADS)
Jamal, M.; Reshak, A. H.
2018-05-01
A new package called 2DRoptimize has been released for performing two-dimensional searches of the equation of state (EOS) for rhombohedral, tetragonal, and hexagonal compounds. The package is compatible with and available alongside the WIEN2k package. The 2DRoptimize package performs a convenient joint volume and c/a structure optimization. First, the package finds the best value of c/a and the associated energy for each volume. In the second step, it calculates the EOS. The package then fits the c/a ratio vs. volume to calculate the c/a ratio at the optimized volume. In the last stage, using the optimized volume and c/a ratio, the 2DRoptimize package calculates the a and c lattice constants for tetragonal and hexagonal compounds, as well as the a lattice constant together with the α angle for rhombohedral compounds. We tested our new package on several hexagonal, tetragonal, and rhombohedral structures, and the 2D search results for the EOS showed that this method is more accurate than a 1D search. Our results agreed very well with the experimental data and were better than previous theoretical calculations.
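The staged (volume, c/a) search described above can be sketched with simple polynomial fits. The energy surface below is a synthetic quadratic stand-in for WIEN2k total energies, with an assumed minimum at V = 100 and c/a = 1.6; the package's actual fitting forms and units are not reproduced here:

```python
import numpy as np

# Toy E(V, c/a) surface in place of computed total energies
def energy(V, ca):
    return 0.01 * (V - 100) ** 2 + 5.0 * (ca - 1.6) ** 2 - 50.0

volumes = np.linspace(90, 110, 11)
ca_grid = np.linspace(1.4, 1.8, 21)

best_ca, best_E = [], []
for V in volumes:
    E = [energy(V, ca) for ca in ca_grid]
    # Step 1: best c/a and its energy at this fixed volume (parabolic fit)
    coeffs = np.polyfit(ca_grid, E, 2)
    ca_min = -coeffs[1] / (2 * coeffs[0])
    best_ca.append(ca_min)
    best_E.append(np.polyval(coeffs, ca_min))

# Step 2: fit E(V) along the ridge of optimal c/a (stand-in for the EOS fit)
ev = np.polyfit(volumes, best_E, 2)
V_opt = -ev[1] / (2 * ev[0])

# Step 3: fit c/a vs V and evaluate it at the optimized volume
cv = np.polyfit(volumes, best_ca, 2)
ca_opt = np.polyval(cv, V_opt)
```

From `V_opt` and `ca_opt` the lattice constants a and c (or a and α for rhombohedral cells) follow from the cell geometry, mirroring the final stage of the package's workflow.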
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (the Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of Spectral Dictionaries quickly grow with the peptide length, making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length, thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small, thus opening the possibility of using them to speed up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full-length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags, and introduce gapped tags that have advantages over conventional peptide sequence tags in MS/MS database searches. PMID:21444829
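The core idea of a gapped peptide, replacing runs of residues whose order is ambiguous with the sum of their masses, can be illustrated with a toy enumerator. This is not the MS-GappedDictionary algorithm itself (which works from spectrum scoring), just the representation it produces; the residue masses are rounded integers over a small alphabet:

```python
from itertools import combinations

# Toy integer residue masses for a small amino-acid subset
AA_MASS = {"G": 57, "A": 71, "S": 87, "P": 97}

def gapped_versions(peptide, max_merges=1):
    """All versions of `peptide` with up to `max_merges` adjacent runs
    collapsed into a single mass gap."""
    n = len(peptide)
    results = set()
    for k in range(max_merges + 1):
        # keeping n-1-k cut points merges exactly k adjacent pairs
        for cuts in combinations(range(1, n), n - 1 - k):
            bounds = (0,) + cuts + (n,)
            gapped = tuple(
                peptide[a:b] if b - a == 1
                else sum(AA_MASS[c] for c in peptide[a:b])
                for a, b in zip(bounds, bounds[1:])
            )
            results.add(gapped)
    return results

variants = gapped_versions("GASP", max_merges=1)
# e.g. ('G', 'A', 184) replaces the ambiguous suffix 'SP' by its mass
```

A gapped peptide like `('G', 'A', 184)` is longer and more specific than a short sequence tag, yet avoids committing to the full residue order the spectrum cannot resolve, which is the niche the abstract describes.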
Improving e-book access via a library-developed full-text search tool.
Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N
2007-01-01
This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.
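The fan-out-and-group pattern behind such a federated tool can be sketched in a few lines. The collection names and documents below are illustrative placeholders, not the HSLS collections or the Vivísimo environment's actual API; clustering of results into categories is reduced here to grouping by source collection:

```python
# Minimal federated search sketch: send one query to several
# collection-specific search functions and merge the hits.
def search_collection(name, docs, query):
    q = query.lower()
    return [(name, title) for title, text in docs if q in text.lower()]

# Hypothetical e-book collections with one toy document each
collections = {
    "Books@Ovid": [("Harrison's", "internal medicine reference text")],
    "STAT!Ref": [("Drug Facts", "drug interactions and dosing")],
    "AccessMedicine": [("CURRENT Dx", "diagnosis and treatment in medicine")],
}

def federated_search(query):
    hits = []
    for name, docs in collections.items():
        hits.extend(search_collection(name, docs, query))
    # group hits by collection, mimicking categorized result display
    grouped = {}
    for name, title in hits:
        grouped.setdefault(name, []).append(title)
    return grouped

results = federated_search("medicine")
```

A production tool would instead dispatch the query to each vendor's search interface and deep-link each hit to the relevant section of the full text, as the paper describes.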
Using Search Engine Data as a Tool to Predict Syphilis.
Young, Sean D; Torrone, Elizabeth A; Urata, John; Aral, Sevgi O
2018-07-01
Researchers have suggested that social media and online search data might be used to monitor and predict syphilis and other sexually transmitted diseases. Because people at risk for syphilis might seek sexual health and risk-related information on the internet, we investigated associations between state-level internet search query data (e.g., Google Trends) and reported weekly syphilis cases. We obtained weekly counts of reported primary and secondary syphilis for 50 states from 2012 to 2014 from the US Centers for Disease Control and Prevention. We collected weekly internet search query data for 25 risk-related keywords from 2012 to 2014 for 50 states using Google Trends. We joined 155 weeks of Google Trends data, with a 1-week lag, to weekly syphilis data for a total of 7750 data points. Using the least absolute shrinkage and selection operator (lasso), we trained three linear mixed models on the first 10 weeks of each year. We validated the 2012 and 2013 models for the following 52 weeks and the 2014 model for the following 42 weeks. The models, consisting of different sets of keyword predictors for each year, accurately predicted 144 weeks of primary and secondary syphilis counts for each state, with an overall average R of 0.9 and an overall average root mean squared error of 4.9. We used Google Trends search data from the prior week to predict cases of syphilis in the following weeks for each state. Further research could explore how search data could be integrated into public health monitoring systems.
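The lasso's role here is to select a sparse subset of predictive keywords from the 25 candidates. The sketch below shows that selection mechanism on synthetic data, using a plain lasso solved by coordinate descent with soft-thresholding rather than the linear mixed models of the study; the "keywords" are random stand-ins for weekly Google Trends volumes:

```python
import numpy as np

# Synthetic design: 120 weeks x 25 keyword search-volume features,
# of which only the first 3 actually drive the outcome.
rng = np.random.default_rng(0)
n_weeks, n_keywords = 120, 25
X = rng.standard_normal((n_weeks, n_keywords))
true_coef = np.zeros(n_keywords)
true_coef[:3] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.standard_normal(n_weeks)

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            # soft-thresholding zeroes out weak predictors
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0) / col_sq[j]
    return beta

beta = lasso_cd(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(beta) > 1e-6)  # surviving "keywords"
```

The penalty drives the 22 irrelevant coefficients exactly to zero, leaving a small keyword set per model, which is why the yearly models in the study could each consist of a different handful of predictors.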