9th Annual Systems Engineering Conference: Volume 4 Thursday
2006-10-26
Connectivity, Speed, Volume • Enterprise application integration • Workflow integration or multi-media • Federated search capability • Link analysis and ... categorization, federated search & automated discovery of information • Collaborative tools to quickly share relevant information • Built on commercial ...
JournalMap: Geo-semantic searching for relevant knowledge
USDA-ARS's Scientific Manuscript database
Ecologists struggling to understand rapidly changing environments and evolving ecosystem threats need quick access to relevant research and documentation of natural systems. The advent of semantic and aggregation searching (e.g., Google Scholar, Web of Science) has made it easier to find useful lite...
Real-time scheduling using minimum search
NASA Technical Reports Server (NTRS)
Tadepalli, Prasad; Joshi, Varad
1992-01-01
In this paper we consider a simple model of real-time scheduling. We present a real-time scheduling system called RTS which is based on Korf's Minimin algorithm. Experimental results show that the schedule quality initially improves with the amount of look-ahead search and tapers off quickly. So it appears that reasonably good schedules can be produced with a relatively shallow search.
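The RTS implementation itself is not included in the abstract. As an illustration only, here is a minimal sketch of Korf-style minimin look-ahead over a generic scheduling state space; the successors() and h() callables and their signatures are assumptions for the example, not details from the paper.

```python
# Minimal sketch of minimin look-ahead (after Korf). The caller supplies
# successors(state) -> [(action, next_state, step_cost)] and a heuristic
# h(state); these names are illustrative, not taken from the RTS paper.

def minimin_value(state, depth, successors, h):
    """Back up the minimum f = g + h value found on the look-ahead frontier."""
    if depth == 0 or not successors(state):
        return h(state)
    return min(step_cost + minimin_value(nxt, depth - 1, successors, h)
               for _, nxt, step_cost in successors(state))

def choose_action(state, depth, successors, h):
    """Pick the first action on a path toward the best frontier node."""
    best_action, best_value = None, float("inf")
    for action, nxt, step_cost in successors(state):
        value = step_cost + minimin_value(nxt, depth - 1, successors, h)
        if value < best_value:
            best_action, best_value = action, value
    return best_action
```

Because the look-ahead frontier grows exponentially with depth, deepening the search quickly becomes expensive, which is consistent with the abstract's observation that schedule quality gains taper off at shallow depths.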
RadSearch: a RIS/PACS integrated query tool
NASA Astrophysics Data System (ADS)
Tsao, Sinchai; Documet, Jorge; Moin, Paymann; Wang, Kevin; Liu, Brent J.
2008-03-01
Radiology Information Systems (RIS) contain a wealth of information that can be used for research, education, and practice management. However, the sheer amount of information available makes querying specific data difficult and time consuming. Previous work has shown that a clinical RIS database and its RIS text reports can be extracted, duplicated and indexed for searches while complying with HIPAA and IRB requirements. This project's intent is to provide a software tool, the RadSearch Toolkit, to allow intelligent indexing and parsing of RIS reports for easy yet powerful searches. In addition, the project aims to seamlessly query and retrieve associated images from the Picture Archiving and Communication System (PACS) in situations where an integrated RIS/PACS is in place - even subselecting individual series, such as in an MRI study. RadSearch's application of simple text parsing techniques to index text-based radiology reports will allow the search engine to quickly return relevant results. This powerful combination will be useful in both private practice and academic settings; administrators can easily obtain complex practice management information such as referral patterns; researchers can conduct retrospective studies with specific, multiple criteria; teaching institutions can quickly and effectively create thorough teaching files.
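The abstract does not describe RadSearch's internal data structures. Purely as a rough illustration of how free-text reports can be indexed for fast keyword queries, the sketch below builds a toy inverted index; the report texts and AND-only query semantics are invented for the example.

```python
from collections import defaultdict
import re

def build_index(reports):
    """Map each token to the set of report IDs containing it."""
    index = defaultdict(set)
    for report_id, text in reports.items():
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            index[token].add(report_id)
    return index

def search(index, query):
    """Return report IDs containing every query term (simple AND semantics)."""
    terms = [t.lower() for t in query.split()]
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

# Hypothetical example data
reports = {"r1": "MRI brain: no acute infarct.", "r2": "CT chest: pulmonary nodule."}
idx = build_index(reports)
print(search(idx, "mri brain"))   # {'r1'}
```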
High-performance metadata indexing and search in petascale data storage systems
NASA Astrophysics Data System (ADS)
Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.
2008-07-01
Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
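Spyglass's actual design is only summarized above. The toy sketch below illustrates the general idea of partitioning a file-metadata index by directory subtree so a query scans only the partitions that can match; the partitioning rule and field names here are simplifications invented for the example, not the Spyglass implementation.

```python
from collections import defaultdict

class PartitionedMetadataIndex:
    """Toy index: metadata records grouped by top-level directory."""

    def __init__(self):
        self.partitions = defaultdict(list)

    def add(self, path, metadata):
        top = path.strip("/").split("/")[0]          # partition key
        self.partitions[top].append((path, metadata))

    def query(self, subtree, predicate):
        """Scan only the partition that can contain files under `subtree`."""
        top = subtree.strip("/").split("/")[0]
        return [p for p, m in self.partitions.get(top, []) if predicate(m)]

idx = PartitionedMetadataIndex()
idx.add("/expA/run1/data.h5", {"owner": "alice", "size": 4096})
idx.add("/expB/run7/log.txt", {"owner": "bob", "size": 128})
print(idx.query("/expA", lambda m: m["size"] > 1024))   # ['/expA/run1/data.h5']
```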
ERIC Educational Resources Information Center
Yeh, Her-Tyan; Chen, Bing-Chang; Wang, Bo-Xun
2016-01-01
The current study applied cloud computing technology and smart mobile devices combined with a streaming server for parking lots to plan a city parking integration system. It is also equipped with a parking search system, parking navigation system, parking reservation service, and car retrieval service. With this system, users can quickly find…
Data-Base Software For Tracking Technological Developments
NASA Technical Reports Server (NTRS)
Aliberti, James A.; Wright, Simon; Monteith, Steve K.
1996-01-01
Technology Tracking System (TechTracS) computer program developed for use in storing and retrieving information on technology and related patent information developed under auspices of NASA Headquarters and NASA's field centers. Contents of data base include multiple scanned still images and QuickTime movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. System performs routine functions automatically and serves multiple users.
Development of public science archive system of Subaru Telescope. 2
NASA Astrophysics Data System (ADS)
Yamamoto, Naotaka; Noda, Sachiyo; Taga, Masatoshi; Ozawa, Tomohiko; Horaguchi, Toshihiro; Okumura, Shin-Ichiro; Furusho, Reiko; Baba, Hajime; Yagi, Masafumi; Yasuda, Naoki; Takata, Tadafumi; Ichikawa, Shin-Ichi
2003-09-01
We report various improvements in a public science archive system, SMOKA (Subaru-Mitaka-Okayama-Kiso Archive system). We have developed a new interface to search observational data of minor bodies in the solar system. In addition, we summarize other improvements: (1) searching for frames by specifying wavelength directly, (2) finding calibration data sets automatically, (3) browsing data on weather, humidity, and temperature, which provide information on image quality, (4) providing quick-look images of OHS/CISCO and IRCS, and (5) including the data from OAO HIDES (HIgh Dispersion Echelle Spectrograph).
Exploring FlyBase Data Using QuickSearch.
Marygold, Steven J; Antonazzo, Giulia; Attrill, Helen; Costa, Marta; Crosby, Madeline A; Dos Santos, Gilberto; Goodman, Joshua L; Gramates, L Sian; Matthews, Beverley B; Rey, Alix J; Thurmond, Jim
2016-12-08
FlyBase (flybase.org) is the primary online database of genetic, genomic, and functional information about Drosophila species, with a major focus on the model organism Drosophila melanogaster. The long and rich history of Drosophila research, combined with recent surges in genomic-scale and high-throughput technologies, means that FlyBase now houses a huge quantity of data. Researchers need to be able to rapidly and intuitively query these data, and the QuickSearch tool has been designed to meet these needs. This tool is conveniently located on the FlyBase homepage and is organized into a series of simple tabbed interfaces that cover the major data and annotation classes within the database. This unit describes the functionality of all aspects of the QuickSearch tool. With this knowledge, FlyBase users will be equipped to take full advantage of all QuickSearch features and thereby gain improved access to data relevant to their research. Copyright © 2016 John Wiley & Sons, Inc.
The EBI search engine: EBI search as a service—making biological data accessible for all
Park, Young M.; Squizzato, Silvano; Buso, Nicola; Gur, Tamer
2017-01-01
Abstract We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL–EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing ‘Search as a Service’ capabilities, which are the main topic of this article. PMID:28472374
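The article describes the RESTful API only at a high level. A query along the following lines should work against the public service, but the endpoint path, parameter names, and response fields shown here are assumptions drawn from general EBI Search usage rather than from the paper itself.

```python
import requests

# Assumed endpoint form: /ebisearch/ws/rest/<domain>?query=...&format=json
BASE = "https://www.ebi.ac.uk/ebisearch/ws/rest"

def ebi_search(domain, query, size=10):
    """Run a free-text query against one EBI Search domain and return hits."""
    resp = requests.get(f"{BASE}/{domain}",
                        params={"query": query, "size": size, "format": "json"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("entries", [])

for entry in ebi_search("uniprot", "p53 AND human"):
    print(entry.get("id"))
```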
The NASA master directory: Quick reference guide
NASA Technical Reports Server (NTRS)
Satin, Karen (Editor); Kanga, Carol (Editor)
1989-01-01
This is a quick reference guide to the NASA Master Directory (MD), which is a free, online, multidisciplinary directory of space and Earth science data sets (NASA and non-NASA data) that are of potential interest to the NASA-sponsored research community. The MD contains high-level descriptions of data sets, other data systems and archives, and campaigns and projects. It provides mechanisms for searching for data sets by important criteria such as geophysical parameters, time, and spatial coverage, and provides information on ordering the data. It also provides automatic connections to a number of data systems such as the NASA Climate Data System, the Planetary Data System, the NASA Ocean Data System, the Pilot Land Data System, and others. The MD includes general information about many data systems, data centers, and coordinated data analysis projects. It represents the first major step in the Catalog Interoperability project, whose objective is to enable researchers to quickly and efficiently identify, obtain information about, and get access to space and Earth science data. The guide describes how to access, use, and exit the MD and lists its features.
Unrelated donor search prognostic score to support early HLA consultation and clinical decisions.
Wadsworth, K; Albrecht, M; Fonstad, R; Spellman, S; Maiers, M; Dehn, J
2016-11-01
A simple scoring system that can provide a quick search prognosis at the onset of an adult unrelated donor (URD) search could be a useful tool for transplant physicians. We aimed to determine whether patient human leukocyte Ag genotype frequency (GF) could be used as a surrogate measure of whether or not a potential 10/10 and/or 9/10 URD in the Be The Match Registry (BTMR) can be identified for the patient. GF was assigned on a training data set of 2410 patients that searched the BTMR using the reported ethnic group. A proportional odds model was used to correlate GF with defined search productivity categories as follows: 'Good' (>2 10/10), 'Fair' (1-2 10/10 or No 10/10 and >2 9/10) or 'Poor' (No 10/10 and <3 9/10). A second cohort (n=2411) was used to calculate the concordance by the ethnic group in all three categories. In addition, we validated against an independent cohort (n=1344) resolved as having a 10/10 or 9/10 matched URD. Across the ethnic groups, >90% of cases with 'Good' GF prognosis, 20-26% 'Fair' and <10% 'Poor' had a 10/10 URD. Although not a replacement for an actual URD search, GF offers a quick way for transplant physicians to get an indication of the likely search outcome.
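As a small illustration of the category definitions quoted above, the function below assigns the 'Good', 'Fair', or 'Poor' search-productivity label from counts of potential 10/10 and 9/10 donors. It simply transcribes the stated cut-offs and is not the proportional odds model used in the study.

```python
def search_prognosis(n_10of10, n_9of10):
    """Categorize URD search productivity using the rules quoted in the abstract."""
    if n_10of10 > 2:
        return "Good"
    if 1 <= n_10of10 <= 2 or (n_10of10 == 0 and n_9of10 > 2):
        return "Fair"
    return "Poor"           # no 10/10 and fewer than three 9/10 matches

print(search_prognosis(3, 0))  # Good
print(search_prognosis(0, 4))  # Fair
print(search_prognosis(0, 1))  # Poor
```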
DIALOGLINK: Shortcuts and Quick Tips.
ERIC Educational Resources Information Center
Koga, James S.
1989-01-01
Describes the use of DIALOGLINK, a searching software for online systems that can be used with microcomputers. Topics discussed include buffer size; multiple copies; screen speedup; print spooler; startup shortcuts; accounting files; type-ahead buffers; and logon macros for use with other online services. (12 references) (LRW)
Development of public science archive system of Subaru Telescope
NASA Astrophysics Data System (ADS)
Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatochi; Watanabe, Masaru; Okumura, Shin-Ichiro; Ozawa, Tomohiko; Yamamoto, Naotaka; Hamabe, Masaru
2002-09-01
We have developed a public science archive system, the Subaru-Mitaka-Okayama-Kiso Archive system (SMOKA), as a successor of the Mitaka-Okayama-Kiso Archive (MOKA) system. SMOKA provides access to the public data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory of the University of Tokyo. Since 1997, we have tried to compile a dictionary of FITS header keywords. The completion of the dictionary enabled us to construct a unified public archive of the data obtained with various instruments at these telescopes. SMOKA has two kinds of user interfaces: Simple Search and Advanced Search. Novices can search data by simply selecting the name of the target with the Simple Search interface. Experts would prefer to set detailed constraints on the query using the Advanced Search interface. In order to improve the efficiency of searching, several new features are implemented, such as archive status plots, calibration data search, an annotation system, and an improved Quick Look Image browsing system. We can efficiently develop and operate SMOKA by adopting a three-tier model for the system. Java servlets and Java Server Pages (JSP) are useful to separate the front-end presentation from the middle and back-end tiers.
GIS-Based System for Post-Earthquake Crisis Management Using Cellular Network
NASA Astrophysics Data System (ADS)
Raeesi, M.; Sadeghi-Niaraki, A.
2013-09-01
Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters like earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the locations of the destroyed areas, and the search and rescue phase is usually maintained for many days. Reducing the time needed to reach surviving people is very important, and a Geographical Information System (GIS) can be used to decrease response time and support management in critical situations. Position estimation in a short period of time is therefore important. This paper proposes a GIS-based solution for post-earthquake disaster management. The system relies on several mobile positioning methods such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method, and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site that manages the procedure of quick response. This solution establishes a quick way to manage the post-earthquake crisis.
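The abstract lists positioning methods without formulas. Purely to illustrate the time-of-arrival family, here is a linearized least-squares trilateration sketch that estimates a 2-D position from ranges to known base-station locations; the station coordinates and ranges are invented for the example.

```python
import numpy as np

def trilaterate(stations, ranges):
    """Least-squares position fix from ranges to known 2-D station positions.

    Subtracting the first range equation from the others gives a linear
    system A @ [x, y] = b, solved here with ordinary least squares.
    """
    x1, y1 = stations[0]
    r1 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(stations[1:], ranges[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Invented geometry: four cell towers and a true handset position.
stations = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
true_pos = np.array([400.0, 250.0])
ranges = [np.linalg.norm(true_pos - np.array(s)) for s in stations]
print(trilaterate(stations, ranges))   # approximately [400. 250.]
```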
Mirador: A Simple, Fast Search Interface for Remote Sensing Data
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Strub, Richard; Seiler, Edward; Joshi, Talak; MacHarrie, Peter
2008-01-01
A major challenge for remote sensing science researchers is searching and acquiring relevant data files for their research projects based on content, space and time constraints. Several structured query (SQ) and hierarchical navigation (HN) search interfaces have been developed to satisfy this requirement, yet the dominant search engines in the general domain are based on free-text search. The Goddard Earth Sciences Data and Information Services Center has developed a free-text search interface named Mirador that supports space-time queries, including a gazetteer and geophysical event gazetteer. In order to compensate for a slightly reduced search precision relative to SQ and HN techniques, Mirador uses several search optimizations to return results quickly. The quick response enables a more iterative search strategy than is available with many SQ and HN techniques.
The EBI search engine: EBI search as a service-making biological data accessible for all.
Park, Young M; Squizzato, Silvano; Buso, Nicola; Gur, Tamer; Lopez, Rodrigo
2017-07-03
We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing 'Search as a Service' capabilities, which are the main topic of this article. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Finding paths in tree graphs with a quantum walk
NASA Astrophysics Data System (ADS)
Koch, Daniel; Hillery, Mark
2018-01-01
We analyze the potential for different types of searches using the formalism of scattering random walks on quantum computers. Given a particular type of graph consisting of nodes and connections, a "tree maze," we would like to find a selected final node as quickly as possible, faster than any classical search algorithm. We show that this can be done using a quantum random walk, both through numerical calculations as well as by using the eigenvectors and eigenvalues of the quantum system.
Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease
Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.
1998-01-01
The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.
Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe
2018-06-20
Because of the great influence of multipath effects, noise, and clock errors on pseudorange measurements, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is determined mostly by the known point initialization (KPI) method, and then the ambiguities can be fixed with the LAMBDA method. In this paper, a new method to achieve high-precision indoor pseudolite positioning without using the KPI is proposed. The initial coordinates can be quickly obtained to meet the accuracy requirement of the indoor LAMBDA method. The method proceeds as follows: aiming at a low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by AFM can meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, are conducted to verify the feasibility of the new method. According to the results of the experiments, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope with two-centimeter or four-centimeter search steps is used to ensure centimeter-level precision and high search efficiency. After dealing with the problem of multiple peaks caused by the cosine form of the ambiguity function, the coordinates at the maximum ambiguity function value (AFV) are taken as the initial value for the LAMBDA method, and the ambiguities can be fixed quickly. The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
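The abstract outlines the ambiguity function method (AFM) without equations. The sketch below shows the core idea for a small 2-D grid search: an ambiguity function value (AFV) that approaches 1 where observed and candidate-position carrier phases agree to within whole cycles. The geometry, wavelength constant, and simplified (undifferenced) formulation are inventions for the illustration, not the authors' implementation.

```python
import numpy as np

WAVELENGTH = 0.1903   # metres, roughly the GPS L1 carrier (assumed value)

def afv(candidate, pseudolites, observed_phase):
    """Ambiguity function value: near 1.0 when the candidate position makes
    the observed carrier phases consistent modulo whole cycles."""
    predicted = np.linalg.norm(pseudolites - candidate, axis=1) / WAVELENGTH
    return abs(np.exp(2j * np.pi * (observed_phase - predicted)).mean())

def grid_search(pseudolites, observed_phase, centre, half_span=0.5, step=0.02):
    """Search a square around `centre` (metres) and return the best point."""
    xs = np.arange(centre[0] - half_span, centre[0] + half_span, step)
    ys = np.arange(centre[1] - half_span, centre[1] + half_span, step)
    best, best_val = None, -1.0
    for x in xs:
        for y in ys:
            val = afv(np.array([x, y]), pseudolites, observed_phase)
            if val > best_val:
                best, best_val = np.array([x, y]), val
    return best, best_val

# Invented 2-D geometry: four pseudolites and a true receiver position.
pseudolites = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([4.30, 6.12])
phase = np.linalg.norm(pseudolites - truth, axis=1) / WAVELENGTH  # cycles
print(grid_search(pseudolites, phase, centre=[4.2, 6.0]))
```

The one-metre window and two-centimetre step echo the search scope and step sizes quoted in the abstract; as the abstract notes, several near-equal peaks can appear, so the maximum AFV is only an initial value to hand to LAMBDA.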
NASA Astrophysics Data System (ADS)
Alipova, K. A.; Bart, A. A.; Fazliev, A. Z.; Gordov, E. P.; Okladnikov, I. G.; Privezentsev, A. I.; Titov, A. G.
2017-11-01
The first version of a primitive OWL ontology of the collections of climate and meteorological data of the Institute of Monitoring of Climatic and Ecological Systems SB RAS is presented. The ontology is a component of expert and decision support systems intended for quick search for climate and meteorological data required for the solution of a certain class of applied problems.
NASA Astrophysics Data System (ADS)
Park, Jun-Hyoung; Song, Mi-Young; Plasma Fundamental Technology Research Team
2015-09-01
Plasma databases are required to compute plasma parameters, and highly reliable databases are closely tied to improving the accuracy of simulations. Therefore, a major concern of the plasma properties collection and evaluation system is to create a sustainable and useful research environment for plasma data. The system is committed to providing not only numerical data but also bibliographic data (including DOI information). Originally, our data collection was done by manual search, and in some cases it took a long time to find data. We will find data more automatically and quickly than with legacy methods by crawling or by using a search engine such as Lucene.
Automated document analysis system
NASA Astrophysics Data System (ADS)
Black, Jeffrey D.; Dietzel, Robert; Hartnett, David
2002-08-01
A software application has been developed to aid law enforcement and government intelligence gathering organizations in the translation and analysis of foreign language documents with potential intelligence content. The Automated Document Analysis System (ADAS) provides the capability to search (data or text mine) documents in English and the most commonly encountered foreign languages, including Arabic. Hardcopy documents are scanned by a high-speed scanner and are optical character recognized (OCR). Documents obtained in an electronic format bypass the OCR and are copied directly to a working directory. For translation and analysis, the script and the language of the documents are first determined. If the document is not in English, the document is machine translated to English. The documents are searched for keywords and key features in either the native language or translated English. The user can quickly review the document to determine if it has any intelligence content and whether detailed, verbatim human translation is required. The documents and document content are cataloged for potential future analysis. The system allows non-linguists to evaluate foreign language documents and allows for the quick analysis of a large quantity of documents. All document processing can be performed manually or automatically on a single document or a batch of documents.
Architecture for knowledge-based and federated search of online clinical evidence.
Coiera, Enrico; Walther, Martin; Nguyen, Ken; Lovell, Nigel H
2005-10-24
It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federate search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and utilized meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Clinicians performed 1662 searches over the trial. The average search duration was 4.9 +/- 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability. Furthermore, despite the additional effort required to incorporate the capabilities of each individual source (to improve the quality of search results), system maintenance requires only a small additional overhead.
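The Quick Clinical architecture itself is not reproduced here. The sketch below only illustrates the general wrapper-style pattern of fanning one query out to several sources in parallel and timing each source and the overall search, which is the kind of measurement reported above; the source names and fetcher functions are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed(fetcher, query):
    """Run one source-specific fetcher and record its duration in seconds."""
    start = time.perf_counter()
    results = fetcher(query)
    return results, time.perf_counter() - start

def federated_search(query, wrappers):
    """Dispatch `query` to every source wrapper in parallel and merge results."""
    overall_start = time.perf_counter()
    merged, per_source = [], {}
    with ThreadPoolExecutor(max_workers=len(wrappers)) as pool:
        futures = {name: pool.submit(timed, fetch, query)
                   for name, fetch in wrappers.items()}
        for name, future in futures.items():
            results, duration = future.result()
            merged.extend(results)
            per_source[name] = duration
    return merged, per_source, time.perf_counter() - overall_start

# Hypothetical stand-ins for real source wrappers (e.g. journals, guidelines).
wrappers = {
    "sourceA": lambda q: [f"A hit for {q}"],
    "sourceB": lambda q: [f"B hit for {q}"],
}
print(federated_search("otitis media treatment", wrappers))
```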
Defense AT&L (Volume 34, Number 4, July-August 2005)
2005-08-01
government, industry, and academic communities. The system provides a single site where individuals and organizations can quickly access and search...made specifically about Navy TechMatch, the design, human interface, and system operation of DoD TechMatch are identical. Anyone can view, sort, and...causes another that subsequently supports the first]. Industry, academic, and DoD partners will benefit from the TechMatch concept. Tailored information
The System for Quick Search of the Astronomical Objects and Events in the Digital Plate Archives.
NASA Astrophysics Data System (ADS)
Sergeev, A. V.; Sergeeva, T. P.
Since the middle of the XIX century, observatories all over the world have accumulated about three million astronomical plates containing unique information about the Universe which cannot be obtained or restored with the help of even the newest facilities and technologies, but which may be useful for many modern astronomical investigations. The threat of losing astronomical plate archives for economic, technical or other reasons has confronted the world astronomical community with a problem: the preservation of the unique information kept on those plates. The problem can be solved by transforming the information on the plates into digital form and keeping it on electronic data media. We have begun the creation of a system for quick search and analysis of astronomical events and objects in the digital plate archive of the Ukrainian Main Astronomical Observatory of NAS. Connection of the system to the Internet will allow a remote user (astronomer or observer) to access the digital plate archive and work with it. To provide high efficiency for this work, the plate database (a list of the plates with all information about them and access software) is being prepared. The modular structure of the system's basic software and the standard format of the plate image files allow future development of problem-oriented software for specialized astronomical research.
Standardization of Keyword Search Mode
ERIC Educational Resources Information Center
Su, Di
2010-01-01
In spite of its popularity, keyword search mode has not been standardized. Though information professionals are quick to adapt to various presentations of keyword search mode, novice end-users may find keyword search confusing. This article compares keyword search mode in some major reference databases and calls for standardization. (Contains 3…
Duffy, Steven; de Kock, Shelley; Misso, Kate; Noake, Caro; Ross, Janine; Stirk, Lisa
2016-10-01
The research investigated whether conducting a supplementary search of PubMed in addition to the main MEDLINE (Ovid) search for a systematic review is worthwhile, and sought to ascertain whether this PubMed search can be conducted quickly and whether it retrieves unique, recently published, and ahead-of-print studies that are subsequently considered for inclusion in the final systematic review. Searches of PubMed were conducted after MEDLINE (Ovid) and MEDLINE In-Process (Ovid) searches had been completed for seven recent reviews. The searches were limited to records not in MEDLINE or MEDLINE In-Process (Ovid). Additional unique records were identified for all of the investigated reviews. Search strategies were adapted quickly to run in PubMed, and reviewer screening of the results was not time consuming. For each of the investigated reviews, studies were ordered for full screening; in six cases, studies retrieved from the supplementary PubMed searches were included in the final systematic review. Supplementary searching of PubMed for studies unavailable elsewhere is worthwhile and improves the currency of the systematic reviews.
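A common way to limit a PubMed query to records not yet fully indexed in MEDLINE is to append NOT medline[sb]; the sketch below runs such a search through the NCBI E-utilities esearch endpoint. The filter syntax and endpoint are standard NCBI conventions rather than details taken from this article, so treat them as assumptions.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_not_medline(query, retmax=100):
    """Return PMIDs for records matching `query` that are not (yet) in MEDLINE."""
    params = {
        "db": "pubmed",
        "term": f"({query}) NOT medline[sb]",   # exclude fully indexed MEDLINE records
        "retmax": retmax,
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

print(pubmed_not_medline("systematic review AND screening"))
```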
Duffy, Steven; de Kock, Shelley; Misso, Kate; Noake, Caro; Ross, Janine; Stirk, Lisa
2016-01-01
Objective The research investigated whether conducting a supplementary search of PubMed in addition to the main MEDLINE (Ovid) search for a systematic review is worthwhile, and sought to ascertain whether this PubMed search can be conducted quickly and whether it retrieves unique, recently published, and ahead-of-print studies that are subsequently considered for inclusion in the final systematic review. Methods Searches of PubMed were conducted after MEDLINE (Ovid) and MEDLINE In-Process (Ovid) searches had been completed for seven recent reviews. The searches were limited to records not in MEDLINE or MEDLINE In-Process (Ovid). Results Additional unique records were identified for all of the investigated reviews. Search strategies were adapted quickly to run in PubMed, and reviewer screening of the results was not time consuming. For each of the investigated reviews, studies were ordered for full screening; in six cases, studies retrieved from the supplementary PubMed searches were included in the final systematic review. Conclusion Supplementary searching of PubMed for studies unavailable elsewhere is worthwhile and improves the currency of the systematic reviews. PMID:27822154
The Charlie Sheen Effect on Rapid In-home Human Immunodeficiency Virus Test Sales.
Allem, Jon-Patrick; Leas, Eric C; Caputi, Theodore L; Dredze, Mark; Althouse, Benjamin M; Noar, Seth M; Ayers, John W
2017-07-01
One in eight of the 1.2 million Americans living with human immunodeficiency virus (HIV) are unaware of their positive status, and untested individuals are responsible for most new infections. As a result, testing is the most cost-effective HIV prevention strategy and must be accelerated when opportunities are presented. Web searches for HIV spiked around actor Charlie Sheen's HIV-positive disclosure. However, it is unknown whether Sheen's disclosure impacted offline behaviors like HIV testing. The goal of this study was to determine if Sheen's HIV disclosure was a record-setting HIV prevention event and determine if Web searches presage increases in testing, allowing for rapid detection and reaction in the future. Sales of OraQuick rapid in-home HIV test kits in the USA were monitored weekly from April 12, 2014, to April 16, 2016, alongside Web searches including the terms "test," "tests," or "testing" and "HIV" as accessed from Google Trends. Changes in OraQuick sales around Sheen's disclosure and prediction models using Web searches were assessed. OraQuick sales rose 95% (95% CI, 75-117; p < 0.001) the week of Sheen's disclosure and remained elevated for 4 more weeks (p < 0.05). In total, there were 8225 more sales than expected around Sheen's disclosure, surpassing World AIDS Day by a factor of about 7. Moreover, Web searches mirrored OraQuick sales trends (r = 0.79), demonstrating their ability to presage increases in testing. The "Charlie Sheen effect" represents an important opportunity for a public health response, and in the future, Web searches can be used to detect and act on more opportunities to foster prevention behaviors.
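As a schematic of the kind of comparison reported above (weekly search volume tracking weekly sales, r = 0.79), the snippet below computes a Pearson correlation between two aligned weekly series; the numbers are invented and merely stand in for the Google Trends and OraQuick data.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented weekly series standing in for search volume and test-kit sales.
weeks = np.arange(12)
search_volume = 50 + 5 * weeks + np.array([0, 2, -1, 3, 40, 35, 20, 10, 5, 2, 1, 0])
sales = 200 + 20 * weeks + np.array([5, -3, 2, 0, 150, 120, 80, 30, 10, 5, 2, 1])

r, p_value = pearsonr(search_volume, sales)
print(f"Pearson r = {r:.2f} (p = {p_value:.3g})")
```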
Foster, E; Hawkins, A; Delve, J; Adamson, A J
2014-01-01
Self-Completed Recall and Analysis of Nutrition (scran24) is a prototype computerised 24-h recall system for use with 11-16 year olds. It is based on the Multiple Pass 24-h Recall method and includes prompts and checks throughout the system for forgotten food items. The development of scran24 was informed by an extensive literature review, a series of focus groups and usability testing. The first stage of the recall is a quick list where the user is asked to input all the foods and drinks they remember consuming the previous day. The quick list is structured into meals and snacks. Once the quick list is complete, additional information is collected on each food to determine food type and to obtain an estimate of portion size using digital images of food. Foods are located within the system using a free text search, which is linked to the information entered into the quick list. A time is assigned to each eating occasion using drag and drop onto a timeline. The system prompts the user if no foods or drinks have been consumed within a 3-h time frame, or if fewer than three drinks have been consumed throughout the day. The food composition code and weight (g) of all items selected are automatically allocated and stored. Nutritional information can be generated automatically via the scran24 companion Access database. scran24 was very well received by young people and was relatively quick to complete. The accuracy and precision were close to those of similar computer-based systems currently used in dietary studies. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.
Phylogenetic search through partial tree mixing
2012-01-01
Background Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application available at http://dna.cs.byu.edu/psoda Conclusions The use of Partial Tree Mixing in a partition based tree space allows the algorithm to quickly converge on near optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449
Detection of contraband concealed on the body using x-ray imaging
NASA Astrophysics Data System (ADS)
Smith, Gerald J.
1997-01-01
In an effort to avoid detection, smugglers and terrorists are increasingly using the body as a vehicle for transporting illicit drugs, weapons, and explosives. This trend illustrates the natural tendency of traffickers to seek the path of least resistance, as improved interdiction technology and operational effectiveness have been brought to bear on other trafficking avenues such as luggage, cargo, and parcels. In response, improved technology for human inspection is being developed using a variety of techniques. ASE's BodySearch X-ray Inspection System uses backscatter x-ray imaging of the human body to quickly, safely, and effectively screen for drugs, weapons, and explosives concealed on the body. This paper reviews the law enforcement and social issues involved in human inspections, and briefly describes the ASE BodySearch systems. Operator training, x-ray image interpretation, and maximizing systems effectiveness are also discussed. Finally, data collected from operation of the BodySearch system in the field is presented, and new law enforcement initiatives which have come about due to recent events are reviewed.
TESS Data Processing and Quick-look Pipeline
NASA Astrophysics Data System (ADS)
Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office
2018-01-01
We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.
Kennedy, Carol A; Beaton, Dorcas E; Smith, Peter; Van Eerd, Dwayne; Tang, Kenneth; Inrig, Taucha; Hogg-Johnson, Sheilah; Linton, Denise; Couban, Rachel
2013-11-01
To identify and synthesize evidence for the measurement properties of the QuickDASH, a shortened version of the 30-item DASH (Disabilities of the Arm, Shoulder and Hand) instrument. This systematic review used a best evidence synthesis approach to critically appraise the measurement properties [using COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN)] of the QuickDASH and cross-cultural adaptations. A standard search strategy was conducted between 2005 (year of first publication of QuickDASH) and March 2011 in MEDLINE, EMBASE and CINAHL. The search identified 14 studies to include in the best evidence synthesis of the QuickDASH. A further 11 studies were identified on eight cross-cultural adaptation versions. The QuickDASH has been evaluated in multiple studies covering most of the measurement properties. The best evidence synthesis of the QuickDASH English version suggests that this tool is performing well, with strong positive evidence for reliability and validity (hypothesis testing) and moderate positive evidence for structural validity testing. Strong negative evidence was found for responsiveness due to lower correlations with global estimates of change. Information about the measurement properties of the cross-cultural adaptation versions is still lacking, or the available information is of poor overall methodological quality.
Environmental Information Management For Data Discovery and Access System
NASA Astrophysics Data System (ADS)
Giriprakash, P.
2011-01-01
Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007 and released in early 2008. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.
ERIC Educational Resources Information Center
Bowling, William D.
2013-01-01
Post-secondary education is quickly becoming a requirement for many growing careers. Because of this, an increased focus on post-secondary enrollment and attainment has been seen in the education community, particularly in the K-12 systems. To that end a large number of programs and organizations have begun to provide assistance to these…
Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Gur, Tamer; Cowley, Andrew; Li, Weizhong; Uludag, Mahmut; Pundir, Sangya; Cham, Jennifer A; McWilliam, Hamish; Lopez, Rodrigo
2015-07-01
The European Bioinformatics Institute (EMBL-EBI-https://www.ebi.ac.uk) provides free and unrestricted access to data across all major areas of biology and biomedicine. Searching and extracting knowledge across these domains requires a fast and scalable solution that addresses the requirements of domain experts as well as casual users. We present the EBI Search engine, referred to here as 'EBI Search', an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. API integration provides access to analytical tools, allowing users to further investigate the results of their search. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types including sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, together with relevant life science literature. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Architecture for Knowledge-Based and Federated Search of Online Clinical Evidence
Walther, Martin; Nguyen, Ken; Lovell, Nigel H
2005-01-01
Background It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. Objectives The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federate search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. Methods A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and utilized meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Results Clinicians performed 1662 searches over the trial. The average search duration was 4.9 ± 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. Conclusions The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability. Furthermore, despite the additional effort required to incorporate the capabilities of each individual source (to improve the quality of search results), system maintenance requires only a small additional overhead. PMID:16403716
Just-in-Time Web Searches for Trainers & Adult Educators.
ERIC Educational Resources Information Center
Kirk, James J.
Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…
Complex dynamics of our economic life on different scales: insights from search engine query data.
Preis, Tobias; Reith, Daniel; Stanley, H Eugene
2010-12-28
Search engine query data deliver insight into the behaviour of individuals who are the smallest possible scale of our economic life. Individuals are submitting several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life on an aggregated collective level. We ask the question whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as a complex system of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with weekly search volume of corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series with which we find a clear tendency that search volume time series and transaction volume time series show recurring patterns.
Prior Conceptual Knowledge and Textbook Search.
ERIC Educational Resources Information Center
Byrnes, James P.; Guthrie, John T.
1992-01-01
The role of a subject's conceptual knowledge in the procedural task of searching a text for information was studied for 51 college undergraduates in 2 experiments involving knowledge of anatomy. Students with more anatomical information were able to search a text more quickly. Educational implications are discussed. (SLD)
Database Searching by Managers.
ERIC Educational Resources Information Center
Arnold, Stephen E.
Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…
NASA Technical Reports Server (NTRS)
McGreevy, Michael W.; Connors, Mary M. (Technical Monitor)
2001-01-01
To support Search Requests and Quick Responses at the Aviation Safety Reporting System (ASRS), four new QUORUM methods have been developed: keyword search, phrase search, phrase generation, and phrase discovery. These methods build upon the core QUORUM methods of text analysis, modeling, and relevance-ranking. QUORUM keyword search retrieves ASRS incident narratives that contain one or more user-specified keywords in typical or selected contexts, and ranks the narratives on their relevance to the keywords in context. QUORUM phrase search retrieves narratives that contain one or more user-specified phrases, and ranks the narratives on their relevance to the phrases. QUORUM phrase generation produces a list of phrases from the ASRS database that contain a user-specified word or phrase. QUORUM phrase discovery finds phrases that are related to topics of interest. Phrase generation and phrase discovery are particularly useful for finding query phrases for input to QUORUM phrase search. The presentation of the new QUORUM methods includes: a brief review of the underlying core QUORUM methods; an overview of the new methods; numerous, concrete examples of ASRS database searches using the new methods; discussion of related methods; and, in the appendices, detailed descriptions of the new methods.
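The QUORUM methods are described here only at a high level. As a generic stand-in for the "phrase generation" step, the sketch below lists the most frequent n-grams in a small corpus that contain a user-specified word; the narratives and parameters are invented for the example, and this is not the QUORUM implementation.

```python
from collections import Counter
import re

def generate_phrases(narratives, word, n=3, top=10):
    """Return the most frequent n-grams that contain `word`."""
    counts = Counter()
    for text in narratives:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if word.lower() in gram:
                counts[" ".join(gram)] += 1
    return counts.most_common(top)

# Invented incident-style narratives for illustration.
narratives = [
    "aircraft descended below the assigned altitude during approach",
    "crew failed to maintain assigned altitude in icing conditions",
]
print(generate_phrases(narratives, "altitude", n=2))
```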
BioEve Search: A Novel Framework to Facilitate Interactive Literature Search
Ahmed, Syed Toufeeq; Davulcu, Hasan; Tikves, Sukru; Nair, Radhika; Zhao, Zhongming
2012-01-01
Background. Recent advances in computational and biological methods in last two decades have remarkably changed the scale of biomedical research and with it began the unprecedented growth in both the production of biomedical data and amount of published literature discussing it. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort, but also pave the way to discover hitherto unknown information implicitly conveyed in the texts. Results. We developed a novel framework (named “BioEve”) that seamlessly integrates Faceted Search (Information Retrieval) with Information Extraction module to provide an interactive search experience for the researchers in life sciences. It enables guided step-by-step search query refinement, by suggesting concepts and entities (like genes, drugs, and diseases) to quickly filter and modify search direction, and thereby facilitating an enriched paradigm where user can discover related concepts and keywords to search while information seeking. Conclusions. The BioEve Search framework makes it easier to enable scalable interactive search over large collection of textual articles and to discover knowledge hidden in thousands of biomedical literature articles with ease. PMID:22693501
The Role of Prediction In Perception: Evidence From Interrupted Visual Search
Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro
2014-01-01
Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440
Hypertext-based design of a user interface for scheduling
NASA Technical Reports Server (NTRS)
Woerner, Irene W.; Biefeld, Eric
1993-01-01
Operations Mission Planner (OMP) is an ongoing research project at JPL that utilizes AI techniques to create an intelligent, automated planning and scheduling system. The information space reflects the complexity and diversity of tasks necessary in most real-world scheduling problems. Thus the problem of the user interface is to present as much information as possible at a given moment and allow the user to quickly navigate through the various types of displays. This paper describes a design which applies the hypertext model to solve these user interface problems. The general paradigm is to provide maps and search queries to allow the user to quickly find an interesting conflict or problem, and then allow the user to navigate through the displays in a hypertext fashion.
Magrabi, Farah; Westbrook, Johanna I; Coiera, Enrico W
2007-10-01
Information retrieval systems have the potential to improve patient care but little is known about the variables which influence clinicians' uptake and use of systems in routine work. To determine which factors influenced use of an online evidence retrieval system. Computer logs and pre- and post-system survey analysis of a 4-week clinical trial of the Quick Clinical online evidence system involving 227 general practitioners across Australia. Online evidence use was not linked to general practice training or clinical experience but female clinicians conducted more searches than their male counterparts (mean use=14.38 searches, S.D.=11.68 versus mean use=8.50 searches, S.D.=9.99; t=2.67, d.f.=157, P=0.008). Practice characteristics such as hours worked, type and geographic location of clinic were not associated with search activity. Information seeking was also not related to participants' perceived information needs, computer skills, training nor Internet connection speed. Clinicians who reported direct improvements in patient care as a result of system use had significantly higher rates of system use than other users (mean use=12.55 searches, S.D.=13.18 versus mean use=8.15 searches, S.D.=9.18; t=2.322, d.f.=154 P=0.022). Comparison of participants' views pre- and post- the trial, showed that post-trial clinicians expressed more positive views about searching for information during a consultation (chi(2)=27.40, d.f.=4, P< or =0.001) and a significantly greater number reported seeking information between consultations as a result of having access to an online evidence system in their consulting rooms (chi(2)=9.818, d.f.=2, P=0.010). Clinicians' use of an online evidence system was directly related to their reported experiences of improvements in patient care. Post-trial clinicians positively changed their views about having time to search for information and pursued more questions during clinic hours.
Large-scale feature searches of collections of medical imagery
NASA Astrophysics Data System (ADS)
Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.
1993-09-01
Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
SBA - Dynamic Small Business Search
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.
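The abstract above outlines a general workflow: disperse sample points over the search space, evaluate a cost at each sample, interpolate over the surrounding subspaces, and take the interpolated optimum. The Python sketch below illustrates only that generic idea, not the authors' algorithm; the 2-D toy cost function, sample count, grid resolution, and use of SciPy's Delaunay-based linear (barycentric) interpolation are all assumptions for illustration.

```python
# Sketch only: sample a 2-D search space, interpolate the cost surface with
# barycentric (piecewise-linear) interpolation, and pick the interpolated minimum.
# The cost function and parameter ranges are made up for illustration.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

def cost(x, y):
    # Hypothetical smooth cost surface standing in for the var-planning objective.
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 + 0.1 * np.sin(5 * x) * np.cos(5 * y)

# 1. Disperse sample points over the 2-D searching space [0, 1] x [0, 1].
samples = rng.uniform(0.0, 1.0, size=(200, 2))
values = cost(samples[:, 0], samples[:, 1])

# 2. Interpolate cost over the subspaces around each sample point
#    (griddata with method="linear" does barycentric interpolation on a
#    Delaunay triangulation, the geometric dual of the Voronoi diagram).
gx, gy = np.mgrid[0:1:200j, 0:1:200j]
surface = griddata(samples, values, (gx, gy), method="linear")

# 3. Take the interpolated minimum as the estimate of the optimal solution.
idx = np.nanargmin(surface)
best = (gx.ravel()[idx], gy.ravel()[idx])
print("estimated optimum:", best, "cost:", float(surface.ravel()[idx]))
```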
Making Temporal Search More Central in Spatial Data Infrastructures
NASA Astrophysics Data System (ADS)
Corti, P.; Lewis, B.
2017-10-01
A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we will focus on the temporal aspects of search which include temporal enrichment using a time miner - a software engine able to search for date components within a larger block of text, the storage of time ranges in the search engine, handling historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
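As a rough illustration of the "time miner" idea described above (scanning free text for date components and deriving a time range for temporal indexing), here is a small, hypothetical Python sketch. The regular expressions, the supported date formats, and the function name are assumptions; a production temporal enricher would handle many more cases (historical dates, fuzzy dates, era notation).

```python
# Sketch of a tiny "time miner": pull year/date mentions out of free-text
# metadata and return the implied time range for temporal indexing.
import re
from datetime import date

YEAR_RE = re.compile(r"\b(1[5-9]\d{2}|20\d{2})\b")          # 1500-2099
ISO_DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")     # e.g. 1989-11-09

def mine_time_range(text):
    """Return (start_date, end_date) implied by the text, or None."""
    dates = []
    for y, m, d in ISO_DATE_RE.findall(text):
        dates.append(date(int(y), int(m), int(d)))
    # Bare years not already covered by a full date.
    for y in YEAR_RE.findall(ISO_DATE_RE.sub(" ", text)):
        dates.append(date(int(y), 1, 1))
    if not dates:
        return None
    return min(dates), max(dates)

# Example: an abstract mentioning a survey period.
print(mine_time_range("Aerial photographs acquired between 1952 and 1968, "
                      "rectified on 2017-10-01."))
# -> (date(1952, 1, 1), date(2017, 10, 1))
```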
Block Architecture Problem with Depth First Search Solution and Its Application
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Abdullah, Dahlan; Simarmata, Janner; Pranolo, Andri; Saleh Ahmar, Ansari; Hidayat, Rahmat; Napitupulu, Darmawan; Nurdiyanto, Heri; Febriadi, Bayu; Zamzami, Z.
2018-01-01
Searching Process with Raita Algorithm and its Application
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Saleh Ahmar, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan; Putera Utama Siahaan, Andysah; Hasan Siregar, Muhammad Noor; Nasution, Nurliana; Sundari, Siti; Sriadhi, S.
2018-04-01
Searching is a common process performed by many computer users, and the Raita algorithm is one algorithm that can be used to match and find information in accordance with the patterns entered. The Raita algorithm was applied to a file search application using the Java programming language; testing of the file search showed that it runs quickly, returns accurate results, and supports many data types.
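For reference, the Raita algorithm is a Boyer-Moore-Horspool variant that, at each alignment, first compares the pattern's last, first, and middle characters before checking the remaining characters, and then shifts using the bad-character rule. The record above contains no code, so the sketch below is an illustrative Python implementation rather than the authors'; the function name and return convention are assumptions.

```python
# Illustrative Raita string search: compare last, first, then middle characters
# of the pattern at each alignment, then the remaining characters; shift with
# the Boyer-Moore-Horspool bad-character table.
def raita_search(text, pattern):
    """Return the indices of every occurrence of pattern in text."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    # Bad-character shift table (Horspool style).
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    default_shift = m
    first, middle, last = pattern[0], pattern[m // 2], pattern[-1]
    hits, i = [], 0
    while i <= n - m:
        window = text[i:i + m]
        if (window[-1] == last and window[0] == first
                and window[m // 2] == middle and window == pattern):
            hits.append(i)
        i += shift.get(text[i + m - 1], default_shift)
    return hits

print(raita_search("abracadabra", "abra"))  # -> [0, 7]
```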
Massive problem reports mining and analysis based parallelism for similar search
NASA Astrophysics Data System (ADS)
Zhou, Ya; Hu, Cailin; Xiong, Han; Wei, Xiafei; Li, Ling
2017-05-01
Massive problem reports and their solutions are accumulated over time and continuously collected in XML Spreadsheet (XMLSS) format from enterprises and organizations; they record comprehensive descriptions of problems that can help technicians trace problems and their solutions. Effectively managing and analyzing these massive semi-structured data, so as to provide similar problem solutions, support decisions on the immediate problem, and assist product optimization during hardware and software maintenance, is a significant and challenging issue. For this purpose, we build a data management system to manage, mine, and analyze these data; search results can be categorized and organized into several categories so that users can quickly find where the results of interest are located. Experimental results demonstrate that this system greatly outperforms a traditional centralized management system in performance and in its ability to adapt to heterogeneous data. Moreover, by re-extracting topics, it enables each cluster to be described more precisely and reasonably.
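The record above does not specify the clustering or topic-extraction machinery, so the snippet below is only a generic sketch of one common way to group textual problem reports so that search results can be browsed by category: TF-IDF vectors plus k-means, with the top-weighted terms of each cluster used as a rough topic label. The library choice (scikit-learn), the number of clusters, and the sample reports are all assumptions.

```python
# Generic sketch: cluster free-text problem reports and label each cluster
# with its highest-weighted terms, so search hits can be grouped by category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "server crashes after firmware update",
    "firmware update fails with checksum error",
    "display flickers when brightness is low",
    "screen flicker at low brightness after driver install",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

for cluster_id in range(km.n_clusters):
    # Top terms of the cluster centroid serve as a crude topic description.
    top = km.cluster_centers_[cluster_id].argsort()[::-1][:3]
    members = [r for r, label in zip(reports, km.labels_) if label == cluster_id]
    print(f"cluster {cluster_id}: {[terms[i] for i in top]} -> {members}")
```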
Search space mapping: getting a picture of coherent laser control.
Shane, Janelle C; Lozovoy, Vadim V; Dantus, Marcos
2006-10-12
Search space mapping is a method for quickly visualizing the experimental parameters that can affect the outcome of a coherent control experiment. We demonstrate experimental search space mapping for the selective fragmentation and ionization of para-nitrotoluene and show how this method allows us to gather information about the dominant trends behind our achieved control.
Navigation interface for recommending home medical products.
Luo, Gang
2012-04-01
Based on users' health issues, an intelligent personal health record (iPHR) system can automatically recommend home medical products (HMPs) and display them in a sequential order. However, the sequential output interface does not categorize search results and is not easy for users to quickly navigate to their desired HMPs. To address this problem, we developed a navigation interface for retrieved HMPs. Our idea is to use medical knowledge and nursing knowledge to construct a navigation hierarchy based on product categories. This hierarchy is added to the left side of each search result Web page to help users move through retrieved HMPs. We demonstrate the effectiveness of our techniques using USMLE medical exam cases.
User-oriented evaluation of a medical image retrieval system for radiologists.
Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning
2015-10-01
This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and identify system and interface aspects that need improvement or useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working image retrieval system of images from the biomedical literature were performed in an iterative manner, where each iteration had the participants perform radiology information-seeking tasks, after which the system, as well as the user study design itself, was refined. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded, and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users felt quickly comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and search for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists are quickly familiar with the functionalities but have several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion on the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.
Purkis, Helena M; Lester, Kathryn J; Field, Andy P
2011-12-01
If there is a spider in the room, then the spider phobic in your group is most likely to point it out to you. This phenomenon is believed to arise because our attentional systems are hardwired to attend to threat in our environment, and, to a spider phobic, spiders are threatening. However, an alternative explanation is simply that attention is quickly drawn to the stimulus of most personal relevance in the environment. Our research examined whether positive stimuli with no biological or evolutionary relevance could be allocated preferential attention. We compared attention to pictures of spiders with pictures from the TV program Doctor Who, for people who varied in both their love of Doctor Who and their fear of spiders. We found a double dissociation: interference from spider and Doctor-Who-related images in a visual search task was predicted by spider fear and Doctor Who expertise, respectively. As such, allocation of attention reflected the personal relevance of the images rather than their threat content. The attentional system believed to have a causal role in anxiety disorders is therefore likely to be a general system that responds not to threat but to stimulus relevance; hence, nonevolutionary images, such as those from Doctor Who, captured attention as quickly as fear-relevant spider images. Where this leaves the Empress of Racnoss, we are unsure. (c) 2011 APA, all rights reserved.
Evaluation of search strategies for microcalcifications and masses in 3D images
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2018-03-01
Medical imaging is quickly evolving towards 3D image modalities such as computed tomography (CT), magnetic resonance imaging (MRI) and digital breast tomosynthesis (DBT). These 3D image modalities add volumetric information but further increase the need for radiologists to search through the image data set. Although much is known about search strategies in 2D images, less is known about the functional consequences of different 3D search strategies. We instructed readers to use two different search strategies: drillers had their eye movements restricted to a few regions while they quickly scrolled through the image stack, while scanners explored the 2D slices through eye movements. We used real-time eye position monitoring to ensure observers followed the drilling or the scanning strategy while approximately preserving the percentage of the volumetric data covered by the useful field of view. We investigated search for two signals: a simulated microcalcification and a larger simulated mass. Results show an interaction between the search strategy and lesion type. In particular, scanning provided significantly better detectability for microcalcifications at the cost of 5 times more time to search, while there was little change in the detectability for the larger simulated masses. Analyses of eye movements support the hypothesis that the effectiveness of a search strategy in 3D imaging arises from the interaction of the fixational sampling of visual information and the signals' visibility in the visual periphery.
Plug Your Users into Library Resources with OpenSearch Plug-Ins
ERIC Educational Resources Information Center
Baker, Nicholas C.
2007-01-01
To bring the library catalog and other online resources right into users' workspace quickly and easily without needing much more than a short XML file, the author, a reference and Web services librarian at Williams College, learned to build and use OpenSearch plug-ins. OpenSearch is a set of simple technologies and standards that allows the…
Parameters. US Army War College Quarterly. Volume 25. Number 1. Spring 1995,
1995-01-01
major reason the fratricide rate remains so high is that imperfect human skills and judgment needed to employ weapon systems quickly degrade under... and rehearsals before the Desert Storm ground campaign, the residual rate of fratricide remained unacceptably high. Nor are the high rates at our combat...
Usability/Sentiment for the Enterprise and ENTERPRISE
NASA Technical Reports Server (NTRS)
Meza, David; Berndt, Sarah
2014-01-01
The purpose of the Sentiment of Search Study for NASA Johnson Space Center (JSC) is to gain insight into the intranet search environment. With an initial usability survey, the authors were able to determine a usability score based on the System Usability Scale (SUS). Created in 1986, the freely available, well-cited SUS is commonly used to determine user perceptions of a system (in this case the intranet search environment). As with any improvement initiative, one must first examine and document the current reality of the situation. In this scenario, a method was needed to determine the usability of a search interface in addition to the user's perception of how well the search system was providing results. The use of the SUS provided a mechanism to quickly ascertain information in both areas, by adding one additional open-ended question at the end. The first ten questions allowed us to examine the usability of the system, while the last question informed us of how the users rated the performance of the search results. The final analysis provides us with a better understanding of the current situation and areas to focus on for improvement. The power of search applications to enhance knowledge transfer is indisputable. The performance impact for any user unable to find needed information undermines project lifecycle, resource and scheduling requirements. Ever-increasing complexity of content and the user interface make usability considerations for the intranet, especially for search, a necessity instead of a 'nice-to-have'. Despite these arguments, intranet usability is largely disregarded due to lack of attention beyond the functionality of the infrastructure (White, 2013). The data collected from users of the JSC search system revealed their overall sentiment by means of the widely known System Usability Scale. Results of the scores suggest 75%, ±0.04, of the population rank the search system below average. In terms of a grading scale, this equates to a D or lower. It is obvious JSC users are not satisfied with the current situation; however, they are eager to provide information and assistance in improving the search system. A majority of the respondents provided feedback on the issues most troubling them. This information will be used to enrich the next phase, root cause analysis and solution creation.
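The System Usability Scale scoring mentioned above follows a fixed rule: for the ten 1-5 Likert items, odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A small sketch follows; the example responses are invented.

```python
# Standard SUS scoring: ten Likert items (1-5), odd items score (r - 1),
# even items score (5 - r), total scaled by 2.5 to a 0-100 range.
def sus_score(responses):
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent: mildly negative about the search interface.
print(sus_score([3, 4, 2, 3, 3, 4, 3, 3, 2, 3]))  # -> 40.0
```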
Fundamental resource-allocating model in colleges and universities based on Immune Clone Algorithms
NASA Astrophysics Data System (ADS)
Ye, Mengdie
2017-05-01
In this thesis we convert the optimal course arrangement problem into a combination of antibodies and antigens and draw an analogy with Immune Clone Algorithms. According to the character of the algorithm, we apply cloning, clone genes, and clonal selection to arrange courses. The clone operator can combine evolutionary search with random search, and global search with local search. By cloning and clone-mutating candidate solutions, we can find the global optimal solution quickly.
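The abstract gives no implementation detail, so below is a minimal, generic clonal-selection-style optimizer sketch (in the spirit of CLONALG) rather than the author's course-arrangement system: better candidates receive more clones, clones are mutated (the local/random search component), and the best survivors replace the population (the global selection component). The toy objective, population size, and mutation rule are assumptions.

```python
# Minimal clonal-selection sketch: clone good candidates proportionally to
# their rank, mutate the clones, and keep the best individuals each generation.
import random

def fitness(x):
    # Toy objective standing in for a course-arrangement score (maximize).
    return -(x - 3.2) ** 2

def clonal_search(generations=50, pop_size=10, clones_per_best=5, step=0.5):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        clones = []
        for rank, antibody in enumerate(ranked):
            # Higher-ranked antibodies get more clones (clone operator).
            n_clones = max(1, clones_per_best - rank)
            for _ in range(n_clones):
                # Clone mutation: small random perturbation (local search).
                clones.append(antibody + random.gauss(0.0, step))
        # Clonal selection: best individuals from parents + clones survive.
        population = sorted(population + clones, key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)

random.seed(0)
print(round(clonal_search(), 2))  # should land near 3.2
```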
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
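Capability (i) above, Boolean combinations of search terms, can be illustrated with a tiny sketch. This is not dbEngine's code (a CGI program from 1996); the record format, query structure, and function name below are invented for illustration.

```python
# Illustrative AND/OR term search over simple text records, in the spirit of
# capability (i) above; query = list of OR-groups, all of which must match (AND).
records = [
    {"id": 1, "text": "NADH dehydrogenase subunit, mitochondrial inheritance"},
    {"id": 2, "text": "cytochrome b, non-Mendelian inheritance"},
    {"id": 3, "text": "nuclear-encoded ribosomal protein"},
]

def search(records, and_of_ors):
    """and_of_ors example: [["mitochondrial", "non-Mendelian"], ["inheritance"]].
    A record matches if, for every OR-group, at least one term is present."""
    hits = []
    for rec in records:
        text = rec["text"].lower()
        if all(any(term.lower() in text for term in group) for group in and_of_ors):
            hits.append(rec["id"])
    return hits

# (mitochondrial OR non-Mendelian) AND inheritance
print(search(records, [["mitochondrial", "non-Mendelian"], ["inheritance"]]))  # -> [1, 2]
```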
PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.
Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor
2007-09-01
PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and higher number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Mattman, C. A.; Ramirez, P. M.
2009-12-01
Experience suggests that no single search paradigm will meet all of a community's search requirements. Traditional forms-based search is still considered critical by a significant percentage of most science communities. However, text-based and facet-based search are improving the community's perception that search can be easy and that the data is available and can be located. Finally, semantic search promises ways to find data that were not conceived when the metadata was first captured and organized. This situation suggests that successful science information systems must be able to deploy new search applications quickly, efficiently, and often for ad-hoc purposes. Federated registries allow data to be packaged or associated with their metadata and managed as simple registry objects. Standard reference models for federated registries now exist that ensure registry objects are uniquely identified at registration and that versioning, classification, and cataloging are addressed automatically. Distributed but locally governed, federated registries also provide notification of registry events and federated query, linking, and replication of registry objects. Key principles for shared ontology development in the space sciences are that the ontology remains independent of its implementation and be extensible, flexible and scalable. The dichotomy between digital things and physical/conceptual things in the domain needs to be unified under a standard model, such as the Open Archival Information System (OAIS) Information Object. Finally, the fact must be accepted that ontology development is a difficult task that requires time, patience and experts in both the science domain and information modeling. The Planetary Data System (PDS) has adopted this architecture for its next-generation information system, PDS 2010. The authors will report on progress, briefly describe key elements, and illustrate how the new system will be phased into operations to handle both legacy and new science data. In particular, the shared ontology is being used to drive system implementation through the generation of standards documents and software configuration files. The resulting information system will help meet the expectations of modern scientists by providing more of the information interconnectedness, correlative science, and system interoperability that they desire. (Fig. 1: Data Driven Architecture)
Retrieving clinical evidence: a comparison of PubMed and Google Scholar for quick clinical searches.
Shariff, Salimah Z; Bejaimal, Shayna Ad; Sontrop, Jessica M; Iansavichus, Arthur V; Haynes, R Brian; Weir, Matthew A; Garg, Amit X
2013-08-15
Physicians frequently search PubMed for information to guide patient care. More recently, Google Scholar has gained popularity as another freely accessible bibliographic database. To compare the performance of searches in PubMed and Google Scholar. We surveyed nephrologists (kidney specialists) and provided each with a unique clinical question derived from 100 renal therapy systematic reviews. Each physician provided the search terms they would type into a bibliographic database to locate evidence to answer the clinical question. We executed each of these searches in PubMed and Google Scholar and compared results for the first 40 records retrieved (equivalent to 2 default search pages in PubMed). We evaluated the recall (proportion of relevant articles found) and precision (ratio of relevant to nonrelevant articles) of the searches performed in PubMed and Google Scholar. Primary studies included in the systematic reviews served as the reference standard for relevant articles. We further documented whether relevant articles were available as free full-texts. Compared with PubMed, the average search in Google Scholar retrieved twice as many relevant articles (PubMed: 11%; Google Scholar: 22%; P<.001). Precision was similar in both databases (PubMed: 6%; Google Scholar: 8%; P=.07). Google Scholar provided significantly greater access to free full-text publications (PubMed: 5%; Google Scholar: 14%; P<.001). For quick clinical searches, Google Scholar returns twice as many relevant articles as PubMed and provides greater access to free full-text articles.
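A quick sketch of how recall and precision figures like those above can be computed per search, under the study design described (first 40 records retrieved, with the primary studies of the source systematic review as the relevance standard). The data are invented; note that the abstract words precision as the ratio of relevant to nonrelevant articles, while the conventional relevant/retrieved ratio is used in this sketch.

```python
# Sketch: score one search against the reference standard (the primary studies
# of the matching systematic review), using only the first 40 records retrieved.
def score_search(retrieved_ids, relevant_ids, cutoff=40):
    top = list(retrieved_ids)[:cutoff]
    found = [r for r in top if r in relevant_ids]
    recall = len(found) / len(relevant_ids) if relevant_ids else 0.0
    # Precision here is relevant/retrieved; the abstract phrases it as the
    # ratio of relevant to nonrelevant articles, which differs slightly.
    precision = len(found) / len(top) if top else 0.0
    return recall, precision

# Invented example: 3 of the review's 12 primary studies appear in the top 40.
relevant = {"pmid:1", "pmid:2", "pmid:3", "pmid:4", "pmid:5", "pmid:6",
            "pmid:7", "pmid:8", "pmid:9", "pmid:10", "pmid:11", "pmid:12"}
retrieved = ["pmid:2", "pmid:99", "pmid:7", "pmid:55", "pmid:11"] + [f"x{i}" for i in range(35)]
print(score_search(retrieved, relevant))  # -> (0.25, 0.075)
```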
Yang, Shu; Qiu, Yuyan; Shi, Bo
2016-09-01
This paper explores methods of building the internet of things for regional ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes implementation principles of automatic identification of the types of arrhythmia. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing technology, reliable storage of massive small files, and the implementation of a quick search function.
Asteroids Search Results in Large Photographic Sky Surveys
NASA Astrophysics Data System (ADS)
Shatokhina, S. V.; Kazantseva, L. V.; Yizhakevych, O. M.; Eglitis, I.; Andruk, V. M.
Photographic observations of the XX century contain numerous and varied information about all objects and events of the Universe fixed on plates. The original and interesting observations of small bodies of the Solar system in previous years can be selected and used for various scientific tasks. Existing databases and online services can help make such a selection easily and quickly. The observations of chronologically earlier positions and photometric evaluations of brightness over long periods of time allow refining the orbits of asteroids and identifying various non-stationarities. Photographic observations of the Northern Sky Survey project and observations of clusters in UBVR bands were used for a global search for small bodies of the Solar system. In total, we found 2486 positions of asteroids and 13 positions of comets. All positions were compared with ephemerides. It was found that 80 positions of asteroids have a moment of observation preceding their discovery, and 19 of them are chronologically the earliest observations of these asteroids in the world.
Development of dog-like retrieving capability in a ground robot
NASA Astrophysics Data System (ADS)
MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary
2013-01-01
This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operations, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects will be discussed, including an analysis of its performance, and an overview of the entire system architecture presented. A discussion of the robot's performance in the competition will demonstrate the system's successes in real-world testing.
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A review of electronic medical record keeping on mobile medical service trips in austere settings.
Dainton, Christopher; Chu, Charlene H
2017-02-01
Electronic medical records (EMRs) may address the need for decision and language support for Western clinicians on mobile medical service trips (MSTs) in low resource settings abroad, while providing improved access to records and data management. However, there has yet to be a review of this emerging technology used by MSTs in low-resource settings. The aim of this study is to describe EMR systems designed specifically for use by mobile MSTs in remote settings, and accordingly, determine new opportunities for this technology to improve quality of healthcare provided by MSTs. A MEDLINE, EMBASE, and Scopus/IEEE search and supplementary Google search were performed for EMR systems specific to mobile MSTs. Information was extracted regarding EMR name, organization, scope of use, platform, open source coding, commercial availability, data integration, and capacity for linguistic and decision support. Missing information was requested by email. After screening of 122 abstracts, two articles remained that discussed deployment of EMR systems in MST settings (iChart, SmartList To Go), and thirteen additional EMR systems were found through the Google search. Of these, three systems (Project Buendia, TEBOW, and University of Central Florida's internally developed EMR) are based on modified versions of Open MRS software, while three are smartphone apps (QuickChart EMR, iChart, NotesFirst). Most of the systems use a local network to manage data, while the remaining systems use opportunistic cloud synchronization. Three (TimmyCare, Basil, and Backpack EMR) contain multilingual user interfaces, and only one (QuickChart EMR) contained MST-specific clinical decision support. There have been limited attempts to tailor EMRs to mobile MSTs. Only Open MRS has a broad user base, and other EMR systems should consider interoperability and data sharing with larger systems as a priority. Several systems include tablet compatibility, or are specifically designed for smartphone, which may be helpful given the environment and low resource context. Results from this review may be useful to non-government organizations (NGOs) considering modernization of their medical records practices as EMR use facilitates research, decreases paper administration costs, and improves perceptions of professionalism; however, most MST-specific EMRs remain in their early stages, and further development and research is required before reaching the stage of widespread adoption. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
OneSearch Gives You Access to More Than 7,000 Publishers and Content Providers | Poster
By Robin Meckley, Contributing Writer OneSearch, an exciting new resource from the Scientific Library, is now available to the NCI at Frederick community. This new resource provides a quick and easy way to search multiple Scientific Library resources and collections using a single search box for journal articles, books, media, and more. A large central index is compiled from more than 7,000 publishers and content providers outside the library’s holdings.
Using Genetic Programming with Prior Formula Knowledge to Solve Symbolic Regression Problem.
Lu, Qiang; Ren, Jun; Wang, Zhiguang
2016-01-01
A researcher can infer mathematical expressions of functions quickly by using his professional knowledge (called Prior Knowledge). But the results he finds may be biased and restricted to his research field due to the limitations of his knowledge. In contrast, the Genetic Programming (GP) method can discover fitted mathematical expressions from a huge search space by running evolutionary algorithms, and its results can be generalized to accommodate different fields of knowledge. However, since GP has to search a huge space, its speed of finding results is rather slow. Therefore, in this paper, a framework connecting Prior Formula Knowledge and GP (PFK-GP) is proposed to reduce the space of GP searching. The PFK is built on a Deep Belief Network (DBN) which can identify candidate formulas that are consistent with the features of experimental data. By using these candidate formulas as the seed of a randomly generated population, PFK-GP finds the right formulas quickly by exploring the search space of data features. We have compared PFK-GP with Pareto GP on regression of eight benchmark problems. The experimental results confirm that PFK-GP can reduce the search space and obtain a significant improvement in the quality of symbolic regression (SR).
Ayers, John W; Ribisl, Kurt M; Brownstein, John S
2011-04-01
Public interest in electronic nicotine delivery systems (ENDS) is undocumented. By monitoring search queries, ENDS popularity and correlates of their popularity were assessed in Australia, Canada, the United Kingdom (UK), and the U.S. English-language Google searches conducted from January 2008 through September 2010 were compared with searches for snus, nicotine replacement therapy (NRT), and Chantix® or Champix®. Searches for each week were scaled to the highest weekly search proportion (100), with lower values indicating the relative search proportion compared to the highest-proportion week (e.g., 50=50% of the highest observed proportion). Analyses were performed in 2010. From July 2008 through February 2010, ENDS searches increased in all nations studied except Australia, where an increase occurred more recently. By September 2010, ENDS searches were several-hundred-fold greater than searches for smoking alternatives in the UK and U.S., and were rivaling alternatives in Australia and Canada. Across nations, ENDS searches were highest in the U.S., followed by similar search intensity in Canada and the UK, with Australia having the fewest ENDS searches. Stronger tobacco control, created by clean indoor air laws, cigarette taxes, and anti-smoking populations, was associated with consistently higher levels of ENDS searches. The online popularity of ENDS has surpassed that of snus and NRTs, which have been on the market for far longer, and is quickly outpacing Chantix or Champix. In part, the association between ENDS's popularity and stronger tobacco control suggests ENDS are used to bypass, or quit in response to, smoking restrictions. Search query surveillance is a valuable, real-time, free, and public method to evaluate the diffusion of new health products. This method may be generalized to other behavioral, biological, informational, or psychological outcomes manifested on search engines. Copyright © 2011 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Curious Consequences of a Miscopied Quadratic
ERIC Educational Resources Information Center
Poet, Jeffrey L.; Vestal, Donald L., Jr.
2005-01-01
The starting point of this article is a search for pairs of quadratic polynomials x² + bx ± c with the property that they both factor over the integers. The search leads quickly to some number theory in the form of primitive Pythagorean triples, and this paper develops the connection between these two topics.
Beyond Job Search or Basic Education: Rethinking the Role of Skills in Welfare Reform.
ERIC Educational Resources Information Center
Strawn, Julie
Most welfare-to-work programs may be classified as quick employment programs emphasizing individual or group job searches or skill-building programs emphasizing basic education. Although both types of programs offer benefits, they also suffer from significant limitations. To be more effective than their predecessors, current-generation…
Riding the Information Highway--Towards a New Kind of Learning
ERIC Educational Resources Information Center
Aro, Mikko; Olkinuora, Erkki
2007-01-01
In the modern world, skimming through information quickly and finding the important nuggets of knowledge from amongst the information overload is an essential skill. One way to train oneself for this kind of literacy is reading on the internet, which requires continuous assessment of search results and specifying searches. In this article a…
Laser Direct Routing for High Density Interconnects
NASA Astrophysics Data System (ADS)
Moreno, Wilfrido Alejandro
The laser restructuring of electronic circuits fabricated using standard Very Large Scale Integration (VLSI) process techniques, is an excellent alternative that allows low-cost quick turnaround production with full circuit similarity between the Laser Restructured prototype and the customized product for mass production. Laser Restructurable VLSI (LRVLSI) would allow design engineers the capability to interconnect cells that implement generic logic functions and signal processing schemes to achieve a higher level of design complexity. LRVLSI of a particular circuit at the wafer or packaged chip level is accomplished using an integrated computer controlled laser system to create low electrical resistance links between conductors and to cut conductor lines. An infrastructure for rapid prototyping and quick turnaround using Laser Restructuring of VLSI circuits was developed to meet three main parallel objectives: to pursue research on novel interconnect technologies using LRVLSI, to develop the capability of operating in a quick turnaround mode, and to maintain standardization and compatibility with commercially available equipment for feasible technology transfer. The system is to possess a high degree of flexibility, high data quality, total controllability, full documentation, short downtime, a user-friendly operator interface, automation, historical record keeping, and error indication and logging. A specially designed chip "SLINKY" was used as the test vehicle for the complete characterization of the Laser Restructuring system. With the use of Design of Experiment techniques the Lateral Diffused Link (LDL), developed originally at MIT Lincoln Laboratories, was completely characterized and for the first time a set of optimum process parameters was obtained. With the designed infrastructure fully operational, the priority objective was the search for a substitute for the high resistance, high current leakage to substrate, and relatively low density Lateral Diffused Link. A high density Laser Vertical Link with resistance values below 10 ohms was developed, studied and tested using design of experiment methodologies. The vertical link offers excellent advantages in the area of quick prototyping of electronic circuits, but even more important, due to having similar characteristics to a foundry produced via, it gives quick transfer from the prototype system verification stage to the mass production stage.
The Eclipsing Binary On-Line Atlas (EBOLA)
NASA Astrophysics Data System (ADS)
Bradstreet, D. H.; Steelman, D. P.; Sanders, S. J.; Hargis, J. R.
2004-05-01
In conjunction with the upcoming release of Binary Maker 3.0, an extensive on-line database of eclipsing binaries is being made available. The purposes of the atlas are to: (1) allow quick and easy access to information on published eclipsing binaries; (2) amass a consistent database of light and radial velocity curve solutions to aid in solving new systems; (3) provide invaluable querying capabilities on all of the parameters of the systems so that informative research can be quickly accomplished on a multitude of published results; (4) aid observers in establishing new observing programs based upon stars needing new light and/or radial velocity curves; (5) encourage workers to submit their published results so that others may have easy access to their work; and (6) provide a vast but easily accessible storehouse of information on eclipsing binaries to accelerate the process of understanding analysis techniques and current work in the field. The database will eventually consist of all published eclipsing binaries with light curve solutions. The following information and data will be supplied whenever available for each binary: original light curves in all bandpasses, original radial velocity observations, light curve parameters, RA and Dec, V-magnitudes, spectral types, color indices, periods, binary type, 3D representation of the system near quadrature, plots of the original light curves and synthetic models, plots of the radial velocity observations with theoretical models, and Binary Maker 3.0 data files (parameter, light curve, radial velocity). The pertinent references for each star are also given with hyperlinks directly to the papers via the NASA Abstract website for downloading, if available. In addition the Atlas has extensive searching options so that workers can specifically search for binaries with specific characteristics. The website has more than 150 systems already uploaded. The URL for the site is http://ebola.eastern.edu/.
(Quickly) Testing the Tester via Path Coverage
NASA Technical Reports Server (NTRS)
Groce, Alex
2009-01-01
The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.
Quantum Search in Hilbert Space
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
A proposed quantum-computing algorithm would perform a search for an item of information in a database stored in a Hilbert-space memory structure. The algorithm is intended to make it possible to search relatively quickly through a large database under conditions in which available computing resources would otherwise be considered inadequate to perform such a task. The algorithm would apply, more specifically, to a relational database in which information would be stored in a set of N complex orthonormal vectors, each of N dimensions (where N can be exponentially large). Each vector would constitute one row of a unitary matrix, from which one would derive the Hamiltonian operator (and hence the evolutionary operator) of a quantum system. In other words, all the stored information would be mapped onto a unitary operator acting on a quantum state that would represent the item of information to be retrieved. Then one could exploit quantum parallelism: one could pose all search queries simultaneously by performing a quantum measurement on the system. In so doing, one would effectively solve the search problem in one computational step. One could exploit the direct- and inner-product decomposability of the unitary matrix to make the dimensionality of the memory space exponentially large by use of only linear resources. However, inasmuch as the necessary preprocessing (the mapping of the stored information into a Hilbert space) could be exponentially expensive, the proposed algorithm would likely be most beneficial in applications in which the resources available for preprocessing were much greater than those available for searching.
GODIVA2: interactive visualization of environmental data on the Web.
Blower, J D; Haines, K; Santokhee, A; Liu, C L
2009-03-13
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
Vedaa, Øystein; Harris, Anette; Bjorvatn, Bjørn; Waage, Siri; Sivertsen, Børge; Tucker, Philip; Pallesen, Ståle
2016-01-01
A systematic literature search was carried out to investigate the relationship between quick returns (i.e., 11.0 hours or less between two consecutive shifts) and outcome measures of health, sleep, functional ability and work-life balance. A total of 22 studies published in 21 articles were included. Three types of quick returns were differentiated (from evening to morning/day, night to evening, morning/day to night shifts) where sleep duration and sleepiness appeared to be differently affected depending on which shifts the quick returns occurred between. There were some indications of detrimental effects of quick returns on proximate problems (e.g., sleep, sleepiness and fatigue), although the evidence of associations with more chronic outcome measures (physical and mental health and work-life balance) was inconclusive. Modern societies are dependent on people working shifts. This study systematically reviews literature on the consequences of quick returns (11.0 hours or less between two shifts). Quick returns have detrimental effects on acute health problems. However, the evidence regarding effects on chronic health is inconclusive.
ERIC Educational Resources Information Center
Dysart, Joe
2008-01-01
Given Google's growing market share--69% of all searches by the close of 2007--it's absolutely critical for any school on the Web to ensure its site is Google-friendly. A Google-optimized site ensures that students and parents can quickly find one's district on the Web even if they don't know the address. Plus, good search optimization simply…
Nonpoint-Source Pollution Issues. January 1990-November 1994. QB 95-01. Quick Bibliography Series.
ERIC Educational Resources Information Center
Makuch, Joe
Citations in this bibliography are intended to be a substantial resource for recent investigations (January 1990-November 1994) on nonpoint source pollution and were obtained from a search of the National Agriculture Library's AGRICOLA database. The 196 citations are indexed by author and subject. A representation of the search strategy is…
Architect for Research on Gender and Community Colleges
ERIC Educational Resources Information Center
Lester, Jaime
2009-01-01
A quick search in the "Community College Journal of Research and Practice" for Barbara Townsend's name produces 62 entries. A handful of those entries are the articles that Barbara has authored, but many more are articles that cite her work. Another search on the Web of Science database that tracks citations in a specific set of peer-reviewed…
How IECs Fit into the Counseling Puzzle
ERIC Educational Resources Information Center
Brown, Andy
2017-01-01
Independent educational consultants (IECs) are rapidly becoming a regular part of the college search and admission process as students search for the right fit. Although the cost of going to college has risen sharply in recent years, the cost of hiring a consultant hasn't gone up nearly so quickly. Families are beginning to see consulting as…
Forest restoration is forward thinking
R. Kasten Dumroese; Brian J. Palik; John A. Stanturf
2015-01-01
It is not surprising to us that the topic of forest restoration is being discussed in the Journal of Forestry. It is a topic frequently bantered about in the literature; a quick search in Google Scholar for "forest restoration" generates more than 1 million hits. A significant portion of the debate centers on the search for succinct, holistic, universally...
The Complete Get That Job! A Quick and Easy Guide with Worksheets.
ERIC Educational Resources Information Center
2001
Written for adult new readers, this workbook contains 14 chapters of information on career development, job search and job retention skills. Chapters contain information, worksheets, examples, and summary sheets. The guide is intended to help adults use basic skills to decide what they can do well, identify their job search goals, pick the best…
NASA Technical Reports Server (NTRS)
1976-01-01
The Outlook for Space Study, consideration of National needs and OAST technology goals were factors in the selection of the following themes for candidate technical initiative and supporting program plans: space power station; search for extraterrestrial life; industrialization of space; global service station; exploration of the solar system; and advanced space transportation system. An overview is presented of the Space Theme Workshop activities in developing technology needs, program requirements, and proposed plans in support of each theme. The unedited working papers used by team members are included.
ClinicalKey 2.0: Upgrades in a Point-of-Care Search Engine.
Huslig, Mary Ann; Vardell, Emily
2015-01-01
ClinicalKey 2.0, launched September 23, 2014, offers a mobile-friendly design with a search history feature for targeting point-of-care resources for health care professionals. Browsing is improved with searchable, filterable listings of sources highlighting new resources. ClinicalKey 2.0 improvements include more than 1,400 new Topic Pages for quick access to point-of-care content. A sample search details some of the upgrades and content options.
A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic
Qi, Jin-Peng; Qi, Jie; Zhang, Qing
2016-01-01
Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
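The BSTKS method itself (Haar-transform-derived binary search trees plus a modified KS statistic) is not spelled out in enough detail here to reproduce, so the sketch below only illustrates the underlying building block: using a two-sample Kolmogorov-Smirnov statistic to locate an abrupt change point in a 1-D signal by scanning candidate split points. The brute-force scan, SciPy dependency, and synthetic signal are assumptions, and this exhaustive computation is exactly what BSTKS is designed to avoid.

```python
# Baseline KS-based change-point scan: for each candidate split, compare the
# empirical distributions of the left and right segments and keep the split
# with the largest KS statistic. BSTKS replaces this O(n) scan with a
# tree-guided search; this sketch shows only the underlying statistic.
import numpy as np
from scipy.stats import ks_2samp

def ks_change_point(signal, min_seg=20):
    best_split, best_stat = None, -1.0
    for split in range(min_seg, len(signal) - min_seg):
        stat = ks_2samp(signal[:split], signal[split:]).statistic
        if stat > best_stat:
            best_split, best_stat = split, stat
    return best_split, best_stat

rng = np.random.default_rng(1)
# Synthetic signal with an abrupt mean shift at index 300.
signal = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 200)])
print(ks_change_point(signal))  # split should be detected near index 300
```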
QuickVina: accelerating AutoDock Vina using gradient-based heuristics for global optimization.
Handoko, Stephanus Daniel; Ouyang, Xuchang; Su, Chinh Tran To; Kwoh, Chee Keong; Ong, Yew Soon
2012-01-01
Predicting binding between a macromolecule and a small molecule is a crucial phase in the field of rational drug design. AutoDock Vina, one of the most widely used docking programs, released in 2009, uses an empirical scoring function to evaluate the binding affinity between the molecules and employs the iterated local search global optimizer for global optimization, achieving a significantly improved speed and better accuracy of binding mode prediction compared with its predecessor, AutoDock 4. In this paper, we propose a further improvement in the local search algorithm of Vina by heuristically preventing some intermediate points from undergoing local search. Our improved version of Vina, dubbed QVina, achieved a maximum acceleration of about 25 times with an average speed-up of 8.34 times compared to the original Vina when tested on a set of 231 protein-ligand complexes, while maintaining the optimal scores mostly identical.
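The abstract describes the idea (skip the expensive local search for intermediate points that a cheap test judges unpromising) without giving the rule itself, so the snippet below is only a generic iterated-local-search skeleton with a placeholder skip test, not QuickVina's actual gradient-based heuristic. The objective function, perturbation scheme, and threshold are invented.

```python
# Generic iterated local search with a "skip" heuristic: only candidates whose
# raw score already looks competitive are refined by the costly local search.
# The scoring function and the skip test are placeholders, not QuickVina's.
import random

def score(x):                       # toy 1-D "scoring function" (lower is better)
    return (x - 1.7) ** 2 + 0.2 * abs(x)

def local_search(x, step=0.01, iters=200):
    for _ in range(iters):          # simple hill climbing as the local optimizer
        for cand in (x - step, x + step):
            if score(cand) < score(x):
                x = cand
    return x

def iterated_search(rounds=100, skip_margin=1.0):
    best = local_search(random.uniform(-10, 10))
    for _ in range(rounds):
        cand = best + random.gauss(0.0, 2.0)          # perturbation
        # Skip heuristic: do not pay for local search unless the raw score is
        # already within skip_margin of the incumbent (stand-in for QuickVina's
        # gradient-based test).
        if score(cand) > score(best) + skip_margin:
            continue
        cand = local_search(cand)
        if score(cand) < score(best):
            best = cand
    return best

random.seed(0)
print(round(iterated_search(), 3))  # should approach the minimum near x = 1.6
```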
The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.
2017-01-01
Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
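As a concrete illustration of level-1 QAOA on MAXCUT (not the paper's analytical derivation), the sketch below brute-force simulates the state vector for a small ring graph and grid-searches the two parameters (gamma, beta) for the best expected cut value. The ring size, grid resolution, and parameter ranges are arbitrary choices made for the example.

```python
# Brute-force state-vector simulation of level-1 (p=1) QAOA for MAXCUT on a
# small ring ("Ring of Disagrees"), scanning a (gamma, beta) grid for the best
# expected cut value. Purely illustrative; not the paper's analytical approach.
import numpy as np

n = 4                                   # qubits / ring vertices
edges = [(i, (i + 1) % n) for i in range(n)]
dim = 2 ** n

# Cut value of every computational basis state (diagonal cost operator).
bits = ((np.arange(dim)[:, None] >> np.arange(n)) & 1)       # shape (dim, n)
cut = np.zeros(dim)
for u, v in edges:
    cut += bits[:, u] != bits[:, v]

def qaoa_expectation(gamma, beta):
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)     # |+>^n
    state *= np.exp(-1j * gamma * cut)                        # phase separator
    # Mixer: apply exp(-i * beta * X) to each qubit.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, np.moveaxis(psi, q, 0), axes=1), 0, q)
    state = psi.reshape(dim)
    return float(np.real(np.sum(np.abs(state) ** 2 * cut)))

gammas = np.linspace(0, np.pi, 60)
betas = np.linspace(0, np.pi / 2, 60)
best = max(((qaoa_expectation(g, b), g, b) for g in gammas for b in betas))
print("best <C> = %.3f at gamma=%.2f, beta=%.2f (max cut = %d)" % (*best, n))
```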
Predicting Airport Screening Officers' Visual Search Competency With a Rapid Assessment.
Mitroff, Stephen R; Ericson, Justin M; Sharpe, Benjamin
2018-03-01
Objective The study's objective was to assess a new personnel selection and assessment tool for aviation security screeners. A mobile app was modified to create a tool, and the question was whether it could predict professional screeners' on-job performance. Background A variety of professions (airport security, radiology, the military, etc.) rely on visual search performance-being able to detect targets. Given the importance of such professions, it is necessary to maximize performance, and one means to do so is to select individuals who excel at visual search. A critical question is whether it is possible to predict search competency within a professional search environment. Method Professional searchers from the USA Transportation Security Administration (TSA) completed a rapid assessment on a tablet-based X-ray simulator (XRAY Screener, derived from the mobile technology app Airport Scanner; Kedlin Company). The assessment contained 72 trials that were simulated X-ray images of bags. Participants searched for prohibited items and tapped on them with their finger. Results Performance on the assessment significantly related to on-job performance measures for the TSA officers such that those who were better XRAY Screener performers were both more accurate and faster at the actual airport checkpoint. Conclusion XRAY Screener successfully predicted on-job performance for professional aviation security officers. While questions remain about the underlying cognitive mechanisms, this quick assessment was found to significantly predict on-job success for a task that relies on visual search performance. Application It may be possible to quickly assess an individual's visual search competency, which could help organizations select new hires and assess their current workforce.
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
DATA PROCESSING ASPECTS OF MEDLARS.
AUSTIN, C J
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files.
ERIC Educational Resources Information Center
Allen, Tim
Citations in this bibliography are intended to be a substantial resource for recent investigations (January 1990-January 1995) on animal welfare policy and were obtained from a search of the National Agriculture Library's AGRICOLA database. A representation of the search strategy is included. The 244 citations range in topic and include animal…
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (a Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of Spectral Dictionaries quickly grow with the peptide length, making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length, thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small, thus opening the possibility of using them to speed up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time-consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full-length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags, and introduce gapped tags that have advantages over conventional peptide sequence tags in MS/MS database searches. PMID:21444829
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image overnight, without operator intervention, from an entire 90-GB SAR raw-data tape recorded at 32 MB/s. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users obtain automatically generated, coregistered multi-frequency images as the software performs polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
Aquatic toxicity information retrieval data base (AQUIRE for non-vms) (1600 bpi). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The purpose of AQUIRE is to provide scientists and managers quick access to a comprehensive, systematic, computerized compilation of aquatic toxicity data. During 1992 and early 1993, nine data updates were made to the AQUIRE system. AQUIRE now contains 109,338 individual aquatic toxicity test results for 5,159 chemicals, 2,429 organisms, and over 160 endpoints reviewed from 7,517 publications. New features include a data selection option that permits searches that are restricted to data added or modified through any of the eight most recent updates, and a report generation (Full Record Detail) that displays the entire AQUIRE record for each test identified in a search. Selection of the Full Record Detail feature allows the user to peruse all AQUIRE fields for a given test, including the information stored in the remarks section, while the standard AQUIRE output format presents selected data fields in a concise table. The standard report remains an available option for rapid viewing of system output.
Aquatic toxicity information retrieval data base (AQUIRE for non-vms) (6250 bpi). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The purpose of AQUIRE is to provide scientists and managers quick access to a comprehensive, systematic, computerized compilation of aquatic toxicity data. During 1992 and early 1993, nine data updates were made to the AQUIRE system. AQUIRE now contains 109,338 individual aquatic toxicity test results for 5,159 chemicals, 2,429 organisms, and over 160 endpoints reviewed from 7,517 publications. New features include a data selection option that permits searches that are restricted to data added or modified through any of the eight most recent updates, and a report generation (Full Record Detail) that displays the entire AQUIRE record for each test identified in a search. Selection of the Full Record Detail feature allows the user to peruse all AQUIRE fields for a given test, including the information stored in the remarks section, while the standard AQUIRE output format presents selected data fields in a concise table. The standard report remains an available option for rapid viewing of system output.
Asthma Patients in US Overuse Quick-Relief Inhalers, Underuse Control Medications
Persistence and Adaptation in Immunity: T Cells Balance the Extent and Thoroughness of Search
Fricke, G. Matthew; Letendre, Kenneth A.; Moses, Melanie E.; Cannon, Judy L.
2016-01-01
Effective search strategies have evolved in many biological systems, including the immune system. T cells are key effectors of the immune response, required for clearance of pathogenic infection. T cell activation requires that T cells encounter antigen-bearing dendritic cells within lymph nodes; thus, T cell search patterns within lymph nodes may be a crucial determinant of how quickly a T cell immune response can be initiated. Previous work suggests that T cell motion in the lymph node is similar to a Brownian random walk; however, no detailed analysis has definitively shown whether T cell movement is consistent with Brownian motion. Here, we provide a precise description of T cell motility in lymph nodes and a computational model that demonstrates how motility impacts T cell search efficiency. We find that both Brownian and Lévy walks fail to capture the complexity of T cell motion. Instead, T cell movement is better described as a correlated random walk with a heavy-tailed distribution of step lengths. Using computer simulations, we identify three distinct factors that contribute to increasing T cell search efficiency: 1) a lognormal distribution of step lengths, 2) motion that is directionally persistent over short time scales, and 3) heterogeneity in movement patterns. Furthermore, we show that T cells move differently in specific frequently visited locations that we call “hotspots” within lymph nodes, suggesting that T cells change their movement in response to the lymph node environment. Our results show that like foraging animals, T cells adapt to environmental cues, suggesting that adaptation is a fundamental feature of biological search. PMID:26990103
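The motility model described above can be sketched with a short simulation; the following assumes a 2-D correlated random walk with lognormally distributed step lengths, with persistence and step-length parameters that are purely illustrative rather than fitted to the T cell data.

```python
import numpy as np

def correlated_random_walk(n_steps=1000, persistence=0.8,
                           mu=0.0, sigma=1.0, seed=0):
    """2-D correlated random walk with lognormal step lengths.

    `persistence` in [0, 1): higher values keep the heading closer to the
    previous step's direction (directional persistence over short time scales).
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps + 1, 2))
    heading = rng.uniform(0, 2 * np.pi)
    for t in range(n_steps):
        # turning angle shrinks as persistence grows
        heading += (1.0 - persistence) * rng.normal(0, np.pi / 2)
        step = rng.lognormal(mean=mu, sigma=sigma)
        pos[t + 1] = pos[t] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

if __name__ == "__main__":
    track = correlated_random_walk()
    displacement = np.linalg.norm(track[-1] - track[0])
    print(f"net displacement after {len(track) - 1} steps: {displacement:.1f}")
```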
Heliophysics Data and Modeling Research Using VSPO
NASA Technical Reports Server (NTRS)
Roberts, D. Aaron; Hesse, Michael; Cornwell, Carl
2007-01-01
The primary advantage of Virtual Observatories in scientific research is efficiency: rapid searches for and access to data in convenient forms make it possible to explore scientific questions without spending days or weeks on ancillary tasks. The Virtual Space Physics Observatory provides a general portal to Heliophysics data for this task. Here we will illustrate the advantages of the VO approach by examining specific geomagnetically active times and tracing the activity through the Sun-Earth system. In addition to previous and additional data sources, we will demonstrate an extension of the capabilities to allow searching for model run results from the range of CCMC models. This approach allows the user to quickly compare models and observations at a qualitative level; considerably more work will be needed to develop more seamless connections to data streams and the equivalent numerical output from simulations.
PubMed and beyond: a survey of web tools for searching biomedical literature
Lu, Zhiyong
2011-01-01
The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076
Discovery in a World of Mashups
NASA Astrophysics Data System (ADS)
King, T. A.; Ritschel, B.; Hourcle, J. A.; Moon, I. S.
2014-12-01
When the first digital information was stored electronically, discovery of what existed was through file names and the organization of the file system. With the advent of networks, digital information was shared on a wider scale, but discovery remained based on file and folder names. With a growing number of information sources, name-based discovery quickly became ineffective. The keyword-based search engine was one of the first types of mashup in the world of Web 1.0. Embedded links from one document to another established prescribed relationships between files, and the world of Web 2.0 was formed. Search engines like Google used the links to improve search results and a worldwide mashup was formed. While a vast improvement, the need for semantic (meaning rich) discovery was clear, especially for the discovery of scientific data. In response, every science discipline defined schemas to describe their type of data. Some core schemas were shared, but most schemas are custom-tailored even though they share many common concepts. As with the networking of information sources, science increasingly relies on data from multiple disciplines. So there is a need to bring together multiple sources of semantically rich information. We explore how harvesting, conceptual mapping, facet-based search engines, search term promotion, and style sheets can be combined to create the next generation of mashups in the emerging world of Web 3.0. We use NASA's Planetary Data System and NASA's Heliophysics Data Environment to illustrate how to create a multi-discipline mashup.
COSMOS: Carnegie Observatories System for MultiObject Spectroscopy
NASA Astrophysics Data System (ADS)
Oemler, A.; Clardy, K.; Kelson, D.; Walth, G.; Villanueva, E.
2017-05-01
COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectra features. This eliminates the line search procedure which is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.
Mercury Toolset for Spatiotemporal Metadata
NASA Technical Reports Server (NTRS)
Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James
2010-01-01
Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
Mercury Toolset for Spatiotemporal Metadata
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris
2010-06-01
Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
Rivals in the dark: how competition influences search in decisions under uncertainty.
Phillips, Nathaniel D; Hertwig, Ralph; Kareev, Yaakov; Avrahami, Judith
2014-10-01
In choices between uncertain options, information search can increase the chances of distinguishing good from bad options. However, many choices are made in the presence of other choosers who may seize the better option while one is still engaged in search. How long do (and should) people search before choosing between uncertain options in the presence of such competition? To address this question, we introduce a new experimental paradigm called the competitive sampling game. We use both simulation and empirical data to compare search and choice between competitive and solitary environments. Simulation results show that minimal search is adaptive when one expects competitors to choose quickly or is uncertain about how long competitors will search. Descriptively, we observe that competition drastically reduces information search prior to choice. Copyright © 2014 Elsevier B.V. All rights reserved.
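A toy version of such a setting can be simulated in a few lines. The sketch below is not the authors' competitive sampling game; it assumes two options with unknown payoff probabilities, a rival who samples a fixed small number of times, and the rule that whoever stops sampling first takes their preferred option. Even this crude model shows why sampling much beyond the rival's stopping point stops paying off.

```python
import numpy as np

def one_round(my_samples, rival_samples, rng):
    """One round of a toy two-option choice under competition.

    Both options have unknown payoff probabilities. Each chooser samples both
    options and then prefers the one with the better observed success rate;
    whoever sampled less stops (and picks) first, and a seized option is no
    longer available to the slower chooser."""
    p = rng.uniform(0, 1, 2)                      # true payoff probabilities

    def pick(k):
        observed = rng.binomial(k, p) / k         # observed success rates
        return int(observed[1] > observed[0])

    mine, rivals = pick(my_samples), pick(rival_samples)
    if my_samples <= rival_samples:               # ties resolved in my favour
        return p[mine]                            # I choose first, unconstrained
    if mine == rivals:                            # my preferred option was seized
        return p[1 - mine]
    return p[mine]

def expected_payoff(my_samples, rival_samples=2, n_rounds=20000, seed=0):
    rng = np.random.default_rng(seed)
    return float(np.mean([one_round(my_samples, rival_samples, rng)
                          for _ in range(n_rounds)]))

if __name__ == "__main__":
    for k in (1, 2, 5, 10, 20):                   # my amount of search
        print(f"samples={k:2d}  expected payoff={expected_payoff(k):.3f}")
```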
2015-01-01
Background PubMed is the largest biomedical bibliographic information source on the Internet. PubMed has been considered one of the most important and reliable sources of up-to-date health care evidence. Previous studies examined the effects of domain expertise/knowledge on search performance using PubMed. However, very little is known about PubMed users’ knowledge of information retrieval (IR) functions and their usage in query formulation. Objective The purpose of this study was to shed light on how experienced/nonexperienced PubMed users perform their search queries by analyzing a full-day query log. Our hypotheses were that (1) experienced PubMed users who use system functions quickly retrieve relevant documents and (2) nonexperienced PubMed users who do not use them have longer search sessions than experienced users. Methods To test these hypotheses, we analyzed PubMed query log data containing nearly 3 million queries. User sessions were divided into two categories: experienced and nonexperienced. We compared experienced and nonexperienced users per number of sessions, and experienced and nonexperienced user sessions per session length, with a focus on how fast they completed their sessions. Results To test our hypotheses, we measured how successful information retrieval was (at retrieving relevant documents), represented as the decrease rates of experienced and nonexperienced users from a session length of 1 to 2, 3, 4, and 5. The decrease rate (from a session length of 1 to 2) of the experienced users was significantly larger than that of the nonexperienced groups. Conclusions Experienced PubMed users retrieve relevant documents more quickly than nonexperienced PubMed users in terms of session length. PMID:26139516
A human-machine cooperation route planning method based on improved A* algorithm
NASA Astrophysics Data System (ADS)
Zhang, Zhengsheng; Cai, Chao
2011-12-01
To avoid the limitation of common route planning methods that blindly pursue greater machine intelligence and automation, this paper presents a human-machine cooperation route planning method. The proposed method includes a new A* path searching strategy based on dynamic heuristic searching and a human-cooperated decision strategy to prune the search area. It can overcome the tendency of the A* algorithm to fall into long local searches. Experiments showed that this method can quickly plan a feasible route that meets macro-level policy considerations.
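As a rough illustration of the human-cooperation idea, the sketch below runs A* on a grid in which an operator-supplied set of cells prunes the search area; the dynamic-heuristic component of the paper is not reproduced, and the grid, corridor, and cost model are illustrative assumptions.

```python
import heapq
import itertools

def astar(grid, start, goal, allowed=None):
    """A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle.

    `allowed` is an optional set of cells (e.g. a corridor sketched by a human
    operator) that prunes the search area: cells outside it are never expanded."""
    def h(cell):                                   # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()                        # tie-breaker for the heap
    open_heap = [(h(start), next(tie), start, 0, None)]
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, _, cur, g, parent = heapq.heappop(open_heap)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                            # reconstruct the path
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                continue
            if allowed is not None and nxt not in allowed:
                continue                           # pruned by the human operator
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), next(tie), nxt, g + 1, cur))
    return None

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    corridor = {(r, c) for r in range(3) for c in range(4) if c >= 1}
    print(astar(grid, (0, 0), (2, 0)))                      # unconstrained search
    print(astar(grid, (0, 1), (2, 1), allowed=corridor))    # operator-pruned search
```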
Spatial Query for Planetary Data
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Crockett, Thomas M.; Powell, Mark W.; Joswig, Joseph C.; Fox, Jason M.
2011-01-01
Science investigators need to quickly and effectively assess past observations of specific locations on a planetary surface. This innovation involves a location-based search technology that was adapted and applied to planetary science data to support a spatial query capability for mission operations software. High-performance location-based searching requires the use of spatial data structures for database organization. Spatial data structures are designed to organize datasets based on their coordinates in a way that is optimized for location-based retrieval. The particular spatial data structure that was adapted for planetary data search is the R+ tree.
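A minimal example of such location-based retrieval is shown below using the Python rtree package, which wraps an R-tree index (a close relative of the R+ tree named above, used here purely for illustration); the footprint bounding boxes and query window are made up.

```python
# pip install rtree  (wraps libspatialindex)
from rtree import index

# Hypothetical observation footprints as (xmin, ymin, xmax, ymax) in map coordinates.
footprints = {
    1: (10.0, 20.0, 12.0, 22.0),
    2: (11.5, 21.5, 14.0, 24.0),
    3: (30.0, 40.0, 31.0, 41.0),
}

idx = index.Index()
for obs_id, bbox in footprints.items():
    idx.insert(obs_id, bbox)              # index each bounding box by observation id

# Location-based retrieval: which past observations overlap this query window?
query = (11.0, 21.0, 11.6, 21.6)
hits = list(idx.intersection(query))
print(sorted(hits))                       # -> [1, 2]
```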
The ship-borne infrared searching and tracking system based on the inertial platform
NASA Astrophysics Data System (ADS)
Li, Yan; Zhang, Haibo
2011-08-01
In modern electronic warfare, when the radar system is jammed or forced into a semi-silent state, guidance precision degrades badly, so equipment that depends on electronic guidance cannot strike incoming targets accurately. Optoelectronic devices are then needed to compensate for these shortcomings, but when interference occurs while the radar is providing guidance, and especially when the electro-optical equipment is disturbed by roll, pitch, and yaw rotation, the target can remain outside the field of view of the optoelectronic devices for a long time. The infrared optoelectronic equipment therefore cannot exploit its advantages, nor can it pass target information to the weapon-control system to "reverse-guide" missiles against incoming targets. As a result, a conventional ship-borne infrared system cannot quickly track incoming targets, and its electro-optical countermeasure capability declines heavily. Here we present a new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform; it is currently applied in our XX infrared optoelectronic searching and tracking system. The algorithm has two main steps. First, during radar guidance, the manual mode switches to auto-searching when the guidance deviation exceeds the current field of view. Second, when the contrast of the target in the search scene satisfies the image-capture threshold, the speed computed with a CA-model least-squares method is fed back to the speed loop, and the infrared information is then combined to accomplish closed-loop tracking control of the infrared optoelectronic system. The algorithm is verified experimentally. With the algorithm, the target capture distance is 22.3 kilometers under large guidance deviation, whereas without it the capture distance declines by 12 kilometers. Through semi-automatic searching and reliable capture and tracking, the algorithm improves infrared electro-optical countermeasure capability and reduces target capture time when the radar guidance deviation is large.
OReFiL: an online resource finder for life sciences.
Yamamoto, Yasunori; Takagi, Toshihisa
2007-08-06
Many online resources for the life sciences have been developed and introduced in peer-reviewed papers recently, ranging from databases and web applications to data-analysis software. Some have been introduced in special journal issues or websites with a search function, but others remain scattered throughout the Internet and in the published literature. The searchable resources on these sites are collected and maintained manually and are therefore of higher quality than automatically updated sites, but also require more time and effort. We developed an online resource search system called OReFiL to address these issues. We developed a crawler to gather all of the web pages whose URLs appear in MEDLINE abstracts and full-text papers on the BioMed Central open-access journals. The URLs were extracted using regular expressions and rules based on our heuristic knowledge. We then indexed the online resources to facilitate their retrieval and comparison by researchers. Because every online resource has at least one PubMed ID, we can easily acquire its summary with Medical Subject Headings (MeSH) terms and confirm its credibility through reference to the corresponding PubMed entry. In addition, because OReFiL automatically extracts URLs and updates the index, minimal time and effort is needed to maintain the system. We developed OReFiL, a search system for online life science resources, which is freely available. The system's distinctive features include the ability to return up-to-date query-relevant online resources introduced in peer-reviewed papers; the ability to search using free words, MeSH terms, or author names; easy verification of each hit following links to the corresponding PubMed entry or to papers citing the URL through the search systems of BioMed Central, Scirus, HighWire Press, or Google Scholar; and quick confirmation of the existence of an online resource web page.
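The exact regular expressions and heuristic rules used by OReFiL are not given here, so the following is only an illustrative sketch of the URL-extraction step applied to abstract text; the pattern and the example abstract are assumptions.

```python
import re

# A deliberately simple URL pattern; OReFiL's actual rules are more elaborate.
URL_RE = re.compile(r"https?://[^\s)\]>,;\"']+", re.IGNORECASE)

def extract_urls(text):
    """Return de-duplicated candidate resource URLs found in an abstract."""
    urls = []
    for match in URL_RE.findall(text):
        url = match.rstrip(".")          # trim a trailing sentence period
        if url not in urls:
            urls.append(url)
    return urls

if __name__ == "__main__":
    abstract = ("We present ExampleDB, a curated resource available at "
                "http://example.org/exampledb. Source code is hosted at "
                "https://example.org/code.")
    print(extract_urls(abstract))
```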
OReFiL: an online resource finder for life sciences
Yamamoto, Yasunori; Takagi, Toshihisa
2007-01-01
Background Many online resources for the life sciences have been developed and introduced in peer-reviewed papers recently, ranging from databases and web applications to data-analysis software. Some have been introduced in special journal issues or websites with a search function, but others remain scattered throughout the Internet and in the published literature. The searchable resources on these sites are collected and maintained manually and are therefore of higher quality than automatically updated sites, but also require more time and effort. Description We developed an online resource search system called OReFiL to address these issues. We developed a crawler to gather all of the web pages whose URLs appear in MEDLINE abstracts and full-text papers on the BioMed Central open-access journals. The URLs were extracted using regular expressions and rules based on our heuristic knowledge. We then indexed the online resources to facilitate their retrieval and comparison by researchers. Because every online resource has at least one PubMed ID, we can easily acquire its summary with Medical Subject Headings (MeSH) terms and confirm its credibility through reference to the corresponding PubMed entry. In addition, because OReFiL automatically extracts URLs and updates the index, minimal time and effort is needed to maintain the system. Conclusion We developed OReFiL, a search system for online life science resources, which is freely available. The system's distinctive features include the ability to return up-to-date query-relevant online resources introduced in peer-reviewed papers; the ability to search using free words, MeSH terms, or author names; easy verification of each hit following links to the corresponding PubMed entry or to papers citing the URL through the search systems of BioMed Central, Scirus, HighWire Press, or Google Scholar; and quick confirmation of the existence of an online resource web page. PMID:17683589
Nursing Reference Center: a point-of-care resource.
Vardell, Emily; Paulaitis, Gediminas Geddy
2012-01-01
Nursing Reference Center is a point-of-care resource designed for the practicing nurse, as well as nursing administrators, nursing faculty, and librarians. Users can search across multiple resources, including topical Quick Lessons, evidence-based care sheets, patient education materials, practice guidelines, and more. Additional features include continuing education modules, e-books, and a new iPhone application. A sample search and comparison with similar databases were conducted.
The CARMEN software as a service infrastructure.
Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim
2013-01-28
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code typically written in Matlab, Python, C/C++ and R as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform, as a service without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform, and can be targeted towards any discipline.
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.
Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P
2014-09-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
One-dimensional nanomaterials for energy storage
NASA Astrophysics Data System (ADS)
Chen, Cheng; Fan, Yuqi; Gu, Jianhang; Wu, Liming; Passerini, Stefano; Mai, Liqiang
2018-03-01
The search for energy storage systems with higher energy density, improved safety, and longer cycling life is progressing quickly. One-dimensional (1D) nanomaterials have a large length-to-diameter ratio, resulting in unique electrical, mechanical, magnetic, and chemical properties, and have wide application as electrode materials in different systems. This article reviews the latest hot topics in applying 1D nanomaterials, covering both their synthesis and their applications. 1D nanomaterials can be grouped into the following categories: carbon, silicon, metal oxides, and conducting polymers, and we structure our discussion accordingly. We then survey the unique properties and applications of 1D nanomaterials in batteries and supercapacitors, and provide comments on the progress and advantages of those systems, paving the way for a better understanding of employing 1D nanomaterials for energy storage.
Kaifi, Jussuf T; Kunkel, Miriam; Das, Avisnata; Harouaka, Ramdane A; Dicker, David T; Li, Guangfu; Zhu, Junjia; Clawson, Gary A; Yang, Zhaohai; Reed, Michael F; Gusani, Niraj J; Kimchi, Eric T; Staveley-O'Carroll, Kevin F; Zheng, Si-Yang; El-Deiry, Wafik S
2015-01-01
Colorectal cancer (CRC) metastasectomy improves survival; however, most patients develop recurrences. Circulating tumor cells (CTCs) are an independent prognostic marker in stage IV CRC. We hypothesized that CTCs can be enriched during metastasectomy applying different isolation techniques. 25 CRC patients undergoing liver (16; 64%) or lung (9; 36%) metastasectomy were prospectively enrolled (clinicaltrial.gov identifier: NCT01722903). Central venous (liver) or radial artery (lung) tumor outflow blood (7.5 ml) was collected at incision, during resection, 30 min after resection, and on postoperative day (POD) 1. CTCs were quantified with (1) the EpCAM-based CellSearch® system and (2) size-based isolation with a novel filter device (FMSA). CTCs were immunohistochemically identified using CellSearch®'s criteria (cytokeratin 8/18/19+, CD45- cells containing a nucleus (DAPI+)). CTCs were also enriched with a centrifugation technique (OncoQuick®). CTC numbers peaked during the resection with the FMSA, in contrast to CellSearch® (mean CTC number during resection: FMSA: 22.56 (SEM 7.48) (p = 0.0281), CellSearch®: 0.87 (SEM ± 0.44) (p = 0.3018)). Comparing the two techniques, CTC quantity was significantly higher with the FMSA device (range 0-101) than CellSearch® (range 0-9) at each of the 4 time points examined (P < 0.05). Immunofluorescence staining of cultured CTCs revealed that CTCs have a combined epithelial (CK8/18/19) and macrophage (CD45/CD14) phenotype. Blood sampling during CRC metastasis resection is an opportunity to increase CTC capture efficiency. CTC isolation with the FMSA yields more CTCs than the CellSearch® system. Future studies should focus on characterization of single CTCs to identify targets for molecular therapy and immune escape mechanisms of cancer cells.
CHERNOLIT TM. Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caff, F., Jr.; Kennedy, R.A.; Mahaffey, J.A.
1992-03-02
The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.
Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Jr, F.; Kennedy, R. A.; Mahaffey, J. A.
1992-05-11
The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.
Polya's bees: A model of decentralized decision-making.
Golman, Russell; Hagmann, David; Miller, John H
2015-09-01
How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate.
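In the spirit of the urn scheme described above, the following sketch runs a reinforced urn until one option reaches a quorum threshold and reports how accuracy and decision time move together; the mix of independent search and imitation, and all parameter values, are illustrative choices rather than the authors' model.

```python
import numpy as np

def urn_decision(threshold, p_search=0.3, p_better=0.7, rng=None):
    """Reinforced urn run until one colour reaches a quorum threshold.

    Each step one ball is added: with probability `p_search` its colour comes
    from independent search (the better option, colour 0, is found with
    probability `p_better`); otherwise it copies an existing ball (positive
    feedback). Returns (chosen colour, number of steps)."""
    rng = rng or np.random.default_rng()
    balls = np.ones(2)                              # one ball per option to start
    steps = 0
    while balls.max() < threshold:
        steps += 1
        if rng.random() < p_search:
            colour = 0 if rng.random() < p_better else 1
        else:
            colour = rng.choice(2, p=balls / balls.sum())
        balls[colour] += 1
    return int(np.argmax(balls)), steps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for thr in (5, 20, 80):                         # quorum thresholds to compare
        runs = [urn_decision(thr, rng=rng) for _ in range(2000)]
        acc = np.mean([c == 0 for c, _ in runs])    # how often the better option wins
        avg = np.mean([s for _, s in runs])
        print(f"threshold {thr:3d}: accuracy {acc:.2f}, mean steps {avg:.0f}")
```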
Polya’s bees: A model of decentralized decision-making
Golman, Russell; Hagmann, David; Miller, John H.
2015-01-01
How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate. PMID:26601255
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
Aviation System Analysis Capability Quick Response System Report for Fiscal Year 1998
NASA Technical Reports Server (NTRS)
Ege, Russell; Villani, James; Ritter, Paul
1999-01-01
This document presents the additions and modifications made to the Quick Response System (QRS) in FY 1998 in support of the ASAC QRS development effort. This document builds upon the Aviation System Analysis Capability Quick Response System Report for Fiscal Year 1997.
Development of the Subaru-Mitaka-Okayama-Kiso Archive System
NASA Astrophysics Data System (ADS)
Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatoshi; Watanabe, Masaru; Ozawa, Tomohiko; Hamabe, Masaru
We have developed the Subaru-Mitaka-Okayama-Kiso-Archive (SMOKA) public science archive system which provides access to the data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory/University of Tokyo. SMOKA is the successor of the MOKA3 system. The user can browse the Quick-Look Images, Header Information (HDI) and the ASCII Table Extension (ATE) of each frame from the search result table. A request for data can be submitted in a simple manner. The system is developed with Java Servlet for the back-end, and Java Server Pages (JSP) for content display. The advantage of JSP's is the separation of the front-end presentation from the middle- and back-end tiers which led to an efficient development of the system. The SMOKA homepage is available at SMOKA
Video Analytics for Indexing, Summarization and Searching of Video Archives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Harold E.; Trease, Lynn L.
This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.
A search for spectral lines in gamma-ray bursts using TGRS
NASA Astrophysics Data System (ADS)
Kurczynski, P.; Palmer, D.; Seifert, H.; Teegarden, B. J.; Gehrels, N.; Cline, T. L.; Ramaty, R.; Hurley, K.; Madden, N. W.; Pehl, R. H.
1998-05-01
We present the results of an ongoing search for narrow spectral lines in gamma-ray burst data. TGRS, the Transient Gamma-Ray Spectrometer aboard the Wind satellite, is a high energy-resolution Ge device. Thus it is uniquely situated among the array of space-based, burst-sensitive instruments to look for line features in gamma-ray burst spectra. Our search strategy adopts a two-tiered approach. An automated `quick look' scan searches spectra for statistically significant deviations from the continuum. We analyzed all possible time accumulations of spectra as well as individual spectra for each burst. Follow-up analysis of potential line candidates uses model fitting with F-test and χ2 tests for statistical significance.
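A minimal version of such a "quick look" scan might flag channels that deviate from a locally estimated continuum by several Poisson standard deviations, as sketched below; the median continuum estimate, window size, and threshold are illustrative assumptions, not the TGRS pipeline.

```python
import numpy as np

def quick_look_scan(counts, window=25, n_sigma=4.0):
    """Flag channels whose counts deviate from a local median continuum by
    more than `n_sigma` Poisson standard deviations."""
    counts = np.asarray(counts, dtype=float)
    flagged = []
    half = window // 2
    for i in range(len(counts)):
        lo, hi = max(0, i - half), min(len(counts), i + half + 1)
        local = np.delete(counts[lo:hi], i - lo)      # exclude the channel itself
        continuum = np.median(local)
        sigma = np.sqrt(max(continuum, 1.0))          # Poisson error estimate
        if abs(counts[i] - continuum) > n_sigma * sigma:
            flagged.append((i, (counts[i] - continuum) / sigma))
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    spectrum = rng.poisson(50, size=500)
    spectrum[240] += 60                               # injected narrow "line"
    print(quick_look_scan(spectrum))
```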
2009-06-01
search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages regardless of domain. In addition, this thesis discusses potential evaluation and monitoring techniques to be used in conjunction
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.
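A compact sketch of the combination described, a genetic algorithm for the search phase with a penalty-style stand-in for the discriminant-function constraint control, is given below; the merit function, constraint, and GA operators are all illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def merit(x):                      # arbitrary, possibly non-convex merit function
    return -np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

def constraint_violation(x):       # feasible when sum(x) <= 1
    return max(0.0, np.sum(x) - 1.0)

def genetic_search(dim=4, pop_size=40, n_gen=100, penalty=100.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0, 1, (pop_size, dim))

    def fitness(ind):              # penalize infeasible points (constraint control)
        return merit(ind) - penalty * constraint_violation(ind)

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection of parents
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # single-point crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, dim)
            children[i, cut:], children[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Gaussian mutation, clipped back to the search box
        children += rng.normal(0, 0.05, children.shape)
        pop = np.clip(children, 0, 1)
    best = max(pop, key=fitness)
    return best, merit(best), constraint_violation(best)

if __name__ == "__main__":
    x, m, viol = genetic_search()
    print(np.round(x, 3), round(m, 4), round(viol, 4))
```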
Space communications scheduler: A rule-based approach to adaptive deadline scheduling
NASA Technical Reports Server (NTRS)
Straguzzi, Nicholas
1990-01-01
Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually make adjustments to the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communication Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
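SCS itself is rule-based and far richer than any toy, but the underlying deadline-scheduling idea can be illustrated with a simple earliest-deadline-first sketch on a single resource, as below; the job set and the greedy accept/reject rule are illustrative assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    deadline: int
    duration: int = field(compare=False)
    name: str = field(compare=False)

def schedule_edf(jobs):
    """Greedy earliest-deadline-first schedule on a single resource.
    Returns (accepted assignments, rejected jobs)."""
    heap = list(jobs)
    heapq.heapify(heap)                      # ordered by deadline
    t, accepted, rejected = 0, [], []
    while heap:
        job = heapq.heappop(heap)
        if t + job.duration <= job.deadline: # fits before its deadline
            accepted.append((job.name, t, t + job.duration))
            t += job.duration
        else:
            rejected.append(job.name)        # would miss its deadline
    return accepted, rejected

if __name__ == "__main__":
    requests = [Job(10, 4, "downlink-A"), Job(6, 3, "telemetry-B"),
                Job(7, 5, "payload-C"), Job(14, 2, "downlink-D")]
    acc, rej = schedule_edf(requests)
    print(acc)
    print(rej)
```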
The Footprint Database and Web Services of the Herschel Space Observatory
NASA Astrophysics Data System (ADS)
Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba
2016-10-01
Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data in various formats including Virtual Observatory standards.
Evaluation of parallel reduction strategies for fusion of sensory information from a robot team
NASA Astrophysics Data System (ADS)
Lyons, Damian M.; Leroy, Joseph
2015-05-01
The advantage of using a team of robots to search or to map an area is that by navigating the robots to different parts of the area, searching or mapping can be completed more quickly. A crucial aspect of the problem is the combination, or fusion, of data from team members to generate an integrated model of the search/mapping area. In prior work we looked at the issue of removing mutual robot views from an integrated point cloud model built from laser and stereo sensors, leading to a cleaner and more accurate model. This paper addresses a further challenge: even with mutual views removed, the stereo data from a team of robots can quickly swamp a WiFi connection. This paper proposes and evaluates a communication and fusion approach based on the parallel reduction operation, where data is combined in a series of steps over increasingly large subsets of the team. Eight different strategies for selecting the subsets are evaluated for bandwidth requirements using three robot missions, each carried out with teams of four Pioneer 3-AT robots. Our results indicate that selecting groups to combine based on similar pose but distant location yields the best results.
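The fusion step described here is organized as a parallel reduction: point clouds from team members are merged pairwise over roughly log2(N) rounds instead of all being streamed to one node. The sketch below shows only the reduction pattern; plain concatenation and index-based pairing stand in for the paper's actual fusion step and for the eight pose- and location-based grouping strategies it evaluates.

import numpy as np

def fuse(a, b):
    # Placeholder fusion: concatenate two point clouds. The paper's pipeline
    # additionally removes mutual views and registers the clouds.
    return np.vstack([a, b])

def parallel_reduce(clouds, fuse_fn=fuse):
    # Merge N clouds in ~log2(N) rounds by combining them in pairs.
    while len(clouds) > 1:
        nxt = []
        for i in range(0, len(clouds) - 1, 2):
            nxt.append(fuse_fn(clouds[i], clouds[i + 1]))
        if len(clouds) % 2:          # odd robot out joins the next round
            nxt.append(clouds[-1])
        clouds = nxt
    return clouds[0]

# Example: four robots, each contributing a small random cloud.
team = [np.random.rand(100, 3) for _ in range(4)]
model = parallel_reduce(team)
print(model.shape)   # (400, 3)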
Acute exercise and aerobic fitness influence selective attention during visual search.
Bullock, Tom; Giesbrecht, Barry
2014-01-01
Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals that were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention.
[Progress in the spectral library based protein identification strategy].
Yu, Derui; Ma, Jie; Xie, Zengyan; Bai, Mingze; Zhu, Yunping; Shu, Kunxian
2018-04-25
Mass spectrometry (MS) data have grown exponentially as MS-based proteomics has developed rapidly. Developing quick, accurate and reproducible methods to identify peptides and proteins is therefore a great challenge. Spectral library searching has become a mature strategy for protein identification from tandem mass spectra: experimental spectra are searched against a collection of confidently identified MS/MS spectra observed previously, making full use of peak abundances, peaks from non-canonical fragment ions, and other spectral features. This review provides an overview of the spectral library search strategy and its two key steps, spectral library construction and spectral library searching, and discusses the progress and challenges of the approach.
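At its core, spectral library searching scores an experimental MS/MS spectrum against previously identified library spectra, commonly with a normalized dot product over binned peak intensities. The sketch below shows that scoring idea in a deliberately simplified form; real search engines add peak weighting, precursor-mass filtering and statistical validation, none of which is reproduced here.

import numpy as np

def bin_spectrum(mz, intensity, bin_width=1.0, max_mz=2000.0):
    # Bin peaks onto a fixed m/z grid so spectra can be compared as vectors.
    vec = np.zeros(int(max_mz / bin_width))
    idx = np.clip((np.asarray(mz) / bin_width).astype(int), 0, len(vec) - 1)
    np.add.at(vec, idx, intensity)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def library_search(query, library):
    # Rank library entries by normalized dot product with the query spectrum.
    scores = [(name, float(np.dot(query, spec))) for name, spec in library.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy example: one query spectrum against a two-entry library.
query = bin_spectrum([175.1, 300.2, 450.3], [10, 50, 30])
library = {
    "PEPTIDE_A": bin_spectrum([175.1, 300.2, 450.3], [12, 45, 33]),
    "PEPTIDE_B": bin_spectrum([120.0, 520.7], [40, 60]),
}
print(library_search(query, library))   # PEPTIDE_A should rank first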
Model Checking with Multi-Threaded IC3 Portfolios
2015-01-15
different runs varies randomly depending on the thread interleaving. The use of a portfolio of solvers to maximize the likelihood of a quick solution is...empirically show (cf. Sec. 5.2) that the predictions based on this formula have high accuracy. Note that each solver in the portfolio potentially searches...speedup of over 300. We also show that widening the proof search of ic3 by randomizing its SAT solver is not as effective as parallelization
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High-resolution remote sensing images carry important strategic information, especially for quickly finding time-sensitive targets such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present anywhere in a large remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image poses two challenges: 1) complex backgrounds lead to high miss and false-alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets within a huge image. This paper, taking airplanes as an example, presents an effective method for searching aircraft targets in large-scale optical remote sensing images. Firstly, an improved visual attention model that combines saliency detection with a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from the full region in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.
QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors
Gudyś, Adam; Deorowicz, Sebastian
2014-01-01
Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In this paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435
Byers, J A
1992-09-01
A compiled program, JCE-REFS.EXE (coded in the QuickBASIC language), for use on IBM-compatible personal computers is described. The program converts a DOS text file of current B-I-T-S (BIOSIS Information Transfer System) or BIOSIS Previews references into a DOS file of citations, including abstracts, in a general style used by scientific journals. The latter file can be imported directly into a word processor or the program can convert the file into a random access data base of the references. The program can search the data base for up to 40 text strings with Boolean logic. Selected references in the data base can be exported as a DOS text file of citations. Using the search facility, articles in the Journal of Chemical Ecology from 1975 to 1991 were searched for certain key words in regard to semiochemicals, taxa, methods, chemical classes, and biological terms to determine trends in usage over the period. Positive trends were statistically significant in the use of the words: semiochemical, allomone, allelochemic, deterrent, repellent, plants, angiosperms, dicots, wind tunnel, olfactometer, electrophysiology, mass spectrometry, ketone, evolution, physiology, herbivore, defense, and receptor. Significant negative trends were found for: pheromone, vertebrates, mammals, Coleoptera, Scolytidae, Dendroctonus, lactone, isomer, and calling.
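The described program searches its citation database for up to 40 text strings combined with Boolean logic. A comparable search over plain-text citation records can be expressed in a few lines of modern Python, as in this hedged sketch; the field handling and query form are illustrative, not those of JCE-REFS.EXE.

def matches(citation_text, all_of=(), any_of=(), none_of=()):
    # Boolean text-string search: all_of AND (any_of OR ...) AND NOT none_of.
    text = citation_text.lower()
    return (all(term.lower() in text for term in all_of)
            and (not any_of or any(term.lower() in text for term in any_of))
            and not any(term.lower() in text for term in none_of))

citations = [
    "Byers J.A. 1992. Attraction of bark beetles to semiochemicals...",
    "Smith R. 1990. Pheromone communication in moths...",
]
hits = [c for c in citations if matches(c, all_of=["semiochemical"], none_of=["moth"])]
print(hits)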
Automated recycling of chemistry for virtual screening and library design.
Vainio, Mikko J; Kogej, Thierry; Raubacher, Florian
2012-07-23
An early stage drug discovery project needs to identify a number of chemically diverse and attractive compounds. These hit compounds are typically found through high-throughput screening campaigns. The diversity of the chemical libraries used in screening is therefore important. In this study, we describe a virtual high-throughput screening system called Virtual Library. The system automatically "recycles" validated synthetic protocols and available starting materials to generate a large number of virtual compound libraries, and allows for fast searches in the generated libraries using a 2D fingerprint based screening method. Virtual Library links the returned virtual hit compounds back to experimental protocols to quickly assess the synthetic accessibility of the hits. The system can be used as an idea generator for library design to enrich the screening collection and to explore the structure-activity landscape around a specific active compound.
Tang, Jian; Chen, Yuwei; Jaakkola, Anttoni; Liu, Jinbing; Hyyppä, Juha; Hyyppä, Hannu
2014-01-01
Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). While there are critical implementation problems, such as the ability to estimate the position by sensing the unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, further development work is necessary. Unfortunately, most of the existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application in large areas and over lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) a brute-force matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching that can offer a quick and stable positioning solution with high accuracy, so it can be part of a large-area localization/mapping application. The NAVIS platform reaches an update rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment. Therefore, it can be utilized in real-time applications. PMID:24999715
Development of a Google-based search engine for data mining radiology reports.
Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul
2009-08-01
The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. Comprising 7.9 GB of disk space, 2.9 million text reports were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
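In the system described, reports are exported as XML, indexed by Google Desktop, and the engine's XML response is interpreted by a PERL script before being rendered as ranked results. The fragment below sketches only the response-parsing step, in Python, against an assumed and simplified result format; the element names (result, title, snippet, url) are illustrative and do not reproduce the actual Google Desktop schema.

import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """
<results>
  <result><title>CT chest 2008-11-02</title><snippet>...small apical pneumothorax...</snippet><url>report://12345</url></result>
  <result><title>CXR 2009-01-15</title><snippet>...no pneumothorax identified...</snippet><url>report://67890</url></result>
</results>
"""

def parse_results(xml_text):
    # Walk a simplified XML result feed and yield ranked hits.
    root = ET.fromstring(xml_text)
    for rank, r in enumerate(root.findall("result"), start=1):
        yield rank, r.findtext("title"), r.findtext("snippet"), r.findtext("url")

for rank, title, snippet, url in parse_results(SAMPLE_RESPONSE):
    print(f"{rank}. {title} ({url})\n   {snippet.strip()}")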
Cancer Imaging: Gene Transcription-Based Imaging and Therapeutic Systems
Bhang, Hyo-eun C.; Pomper, Martin G.
2012-01-01
Molecular-genetic imaging of cancer is in its infancy. Over the past decade gene reporter systems have been optimized in preclinical models and some have found their way into the clinic. The search is on to find the best combination of gene delivery vehicle and reporter imaging system that can be translated safely and quickly. The goal is to have a combination that can detect a wide variety of cancers with high sensitivity and specificity in a way that rivals the current clinical standard, positron emission tomography with [18F]fluorodeoxyglucose. To do so will require systemic delivery of reporter genes for the detection of micrometastases, and a nontoxic vector, whether viral or based on nanotechnology, to gain widespread acceptance by the oncology community. Merger of molecular-genetic imaging with gene therapy, a strategy that has been employed in the past, will likely be necessary for such imaging to reach widespread clinical use. PMID:22349219
New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS
NASA Astrophysics Data System (ADS)
Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.
2016-12-01
Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2013-02-01
A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.
Optical follow-up of gravitational wave triggers with DECam
NASA Astrophysics Data System (ADS)
Herner, K.; Annis, J.; Berger, E.; Brout, D.; Butler, R.; Chen, H.; Cowperthwaite, P.; Diehl, H.; Doctor, Z.; Drlica-Wagner, A.; Farr, B.; Finley, D.; Frieman, J.; Holz, D.; Kessler, R.; Lin, H.; Marriner, J.; Nielsen, E.; Palmese, A.; Sako, M.; Soares-Santos, M.; Sobreira, F.; Yanny, B.
2017-10-01
Gravitational wave (GW) events have several possible progenitors, including black hole mergers, cosmic string cusps, supernovae, neutron star mergers, and black hole-neutron star mergers. A subset of GW events are expected to produce electromagnetic (EM) emission that, once detected, will provide complementary information about their astrophysical context. To that end, the LIGO-Virgo Collaboration has partnered with other teams to send GW candidate alerts so that searches for their EM counterparts can be pursued. One such partner is the Dark Energy Survey (DES) and Dark Energy Camera (DECam) Gravitational Waves Program (DES-GW). Situated on the 4m Blanco Telescope at the Cerro Tololo Inter-American Observatory in Chile, DECam is an ideal instrument for optical followup observations of GW triggers in the southern sky. The DES-GW program performs subtraction of new search images with respect to preexisting overlapping images to select candidate sources. Due to the short decay timescale of the expected EM counterparts and the need to quickly eliminate survey areas with no counterpart candidates, it is critical to complete the initial analysis of each night’s images within 24 hours. The computational challenges in achieving this goal include maintaining robust I/O pipelines during the processing, being able to quickly acquire template images of new sky regions outside of the typical DES observing regions, and being able to rapidly provision additional batch computing resources with little advance notice. We will discuss the search area determination, imaging pipeline, general data transfer strategy, and methods to quickly increase the available amount of batch computing. We will present results from the first season of observations from September 2015 to January 2016 and conclude by presenting improvements planned for the second observing season.
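The pipeline's core step, selecting candidate counterparts by subtracting pre-existing template images from new search images, can be illustrated in miniature as below. This is a deliberately simplified sketch: real difference imaging also requires astrometric alignment, PSF matching and careful noise modelling, all omitted here.

import numpy as np

def candidate_pixels(search_img, template_img, n_sigma=5.0):
    # Toy difference imaging: flag pixels that brightened significantly.
    diff = search_img - template_img
    sigma = np.std(diff)
    return np.argwhere(diff > n_sigma * sigma)   # (row, col) of candidate sources

rng = np.random.default_rng(0)
template = rng.normal(100.0, 5.0, size=(64, 64))
search = template + rng.normal(0.0, 5.0, size=(64, 64))
search[40, 21] += 200.0                 # inject a transient-like brightening
print(candidate_pixels(search, template))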
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
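The search-engine approach matches incoming long-period waveforms against a large database of pre-computed seismograms and returns the source parameters of the best-fitting entry. The sketch below shows the matching idea with a simple correlation score over a brute-force scan; the paper's contribution is a fast search that avoids exactly this exhaustive comparison, so treat the loop as the slow baseline it accelerates.

import numpy as np

def correlation(a, b):
    # Normalized zero-lag correlation between two equal-length traces.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def best_match(observed, database):
    # Brute-force scan: return the catalogue entry whose synthetic waveform
    # best correlates with the observed trace.
    return max(database, key=lambda entry: correlation(observed, entry["waveform"]))

# Toy database of two synthetic traces tagged with (made-up) source parameters.
t = np.linspace(0, 60, 600)
database = [
    {"params": {"strike": 30, "dip": 60}, "waveform": np.sin(0.5 * t)},
    {"params": {"strike": 120, "dip": 45}, "waveform": np.sin(0.8 * t)},
]
observed = np.sin(0.8 * t) + 0.1 * np.random.randn(t.size)
print(best_match(observed, database)["params"])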
Online fingerprint verification.
Upendra, K; Singh, S; Kumar, V; Verma, H K
2007-01-01
As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With an increasing emphasis on the emerging automatic personal identification applications, fingerprint based identification is becoming more popular. The most widely used fingerprint representation is the minutiae based representation. The main drawback with this representation is that it does not utilize a significant component of the rich discriminatory information available in the fingerprints. Local ridge structures cannot be completely characterized by minutiae. Also, it is difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study a filter bank based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results have shown that this system can be used effectively for secure online verification applications.
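A filter-bank representation of this kind typically convolves the fingerprint with a bank of oriented Gabor filters and summarizes each response as a feature value, so two prints can be compared as fixed-length vectors rather than unregistered minutiae sets. The sketch below builds a tiny Gabor bank with NumPy as an illustration of the general idea; the kernel parameters and feature summary are assumptions, not the configuration used in the study.

import numpy as np

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=21):
    # Real part of a Gabor filter oriented at angle theta (radians).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def filterbank_features(image, n_orientations=8):
    # Mean absolute response per orientation: a fixed-length texture vector.
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        # Full 2-D convolution via FFT (image and kernel zero-padded to same size).
        shape = tuple(np.array(image.shape) + np.array(kern.shape) - 1)
        resp = np.fft.irfft2(np.fft.rfft2(image, shape) * np.fft.rfft2(kern, shape), shape)
        feats.append(np.mean(np.abs(resp)))
    return np.array(feats)

img = np.random.rand(64, 64)            # stand-in for a fingerprint block
print(filterbank_features(img))         # 8-dimensional orientation signature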
Predicting Novel Bulk Metallic Glasses via High- Throughput Calculations
NASA Astrophysics Data System (ADS)
Perim, E.; Lee, D.; Liu, Y.; Toher, C.; Gong, P.; Li, Y.; Simmons, W. N.; Levy, O.; Vlassak, J.; Schroers, J.; Curtarolo, S.
Bulk metallic glasses (BMGs) are materials which may combine key properties of crystalline metals, such as high hardness, with others typically exhibited by plastics, such as easy processability. However, the cost of the known BMGs poses a significant obstacle to the development of applications, which has led to a long search for novel, economically viable BMGs. The emergence of high-throughput DFT calculations, such as the library provided by the AFLOWLIB consortium, has provided new tools for materials discovery. We have used this data to develop a new glass-forming descriptor combining structural factors with thermodynamics in order to quickly screen through a large number of alloy systems in the AFLOWLIB database, identifying the most promising systems and the optimal compositions for glass formation. National Science Foundation (DMR-1436151, DMR-1435820, DMR-1436268).
ERIC Educational Resources Information Center
Collins, Mary Ellen
2013-01-01
Quick turnover among chief advancement officers or development leaders is no longer a rare occurrence. Recruitment executives report that half or more of the searches they are conducting result from vacancies caused by vice presidents being asked to resign. And those terminations are not because of fundraising failures. Vice presidential…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-26
... DEPARTMENT OF EDUCATION Notice of Submission for OMB Review; Institute of Education Sciences; Quick Response Information System (QRIS) 2012-2015 System Clearance SUMMARY: The National Center for Education Statistics (NCES) Quick Response Information System (QRIS) consists of the Fast Response Survey...
Software Applications to Access Earth Science Data: Building an ECHO Client
NASA Astrophysics Data System (ADS)
Cohen, A.; Cechini, M.; Pilone, D.
2010-12-01
Historically, developing an ECHO (NASA’s Earth Observing System (EOS) ClearingHOuse) client required interaction with its SOAP API. SOAP, as a framework for web service communication, has numerous advantages for enterprise applications and Java/C# type programming languages. However, as interest has grown for quick development cycles and more intriguing “mashups,” ECHO has seen the SOAP API lose its appeal. In order to address these changing needs, ECHO has introduced two new interfaces facilitating simple access to its metadata holdings. The first interface is built upon the OpenSearch format and ESIP Federated Search framework. The second interface is built upon the Representational State Transfer (REST) architecture. Using the REST and OpenSearch APIs to access ECHO makes development with modern languages much more feasible and simpler. Client developers can leverage the simple interaction with ECHO to focus more of their time on the advanced functionality they are presenting to users. To demonstrate the simplicity of developing with the REST API, participants will be led through a hands-on experience where they will develop an ECHO client that performs the following actions:
+ Login
+ Provider discovery
+ Provider based dataset discovery
+ Dataset, Temporal, and Spatial constraint based Granule discovery
+ Online Data Access
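The general shape of a granule-discovery call against an OpenSearch-style endpoint can be sketched in Python as below. The endpoint URL and parameter names here are placeholders for the ECHO OpenSearch interface described in the abstract; the real parameter set should be taken from the service's OpenSearch description document.

import requests
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def granule_search(endpoint, dataset_id, bbox, start, end, count=10):
    # Placeholder parameter names for an OpenSearch-style granule query.
    params = {
        "datasetId": dataset_id,
        "boundingBox": ",".join(map(str, bbox)),
        "startTime": start,
        "endTime": end,
        "numberOfResults": count,
    }
    resp = requests.get(endpoint, params=params, timeout=60)
    resp.raise_for_status()
    feed = ET.fromstring(resp.content)      # Atom feed of matching granules
    return [e.findtext(ATOM + "title") for e in feed.findall(ATOM + "entry")]

# Example call (the endpoint URL is a placeholder, not a real service address):
# titles = granule_search("https://example.nasa.gov/opensearch/granules.atom",
#                         "SOME_DATASET_ID", (-180, -90, 180, 90),
#                         "2010-01-01T00:00:00Z", "2010-01-02T00:00:00Z")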
NASA Astrophysics Data System (ADS)
Kase, Sue E.; Vanni, Michelle; Knight, Joanne A.; Su, Yu; Yan, Xifeng
2016-05-01
Within operational environments decisions must be made quickly based on the information available. Identifying an appropriate knowledge base and accurately formulating a search query are critical tasks for decision-making effectiveness in dynamic situations. The spreading of graph data management tools to access large graph databases is a rapidly emerging research area of potential benefit to the intelligence community. A graph representation provides a natural way of modeling data in a wide variety of domains. Graph structures use nodes, edges, and properties to represent and store data. This research investigates the advantages of information search by graph query initiated by the analyst and interactively refined within the contextual dimensions of the answer space toward a solution. The paper introduces SLQ, a user-friendly graph querying system enabling the visual formulation of schemaless and structureless graph queries. SLQ is demonstrated with an intelligence analyst information search scenario focused on identifying individuals responsible for manufacturing a mosquito-hosted deadly virus. The scenario highlights the interactive construction of graph queries without prior training in complex query languages or graph databases, intuitive navigation through the problem space, and visualization of results in graphical format.
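SLQ itself is a research prototype, but the underlying idea, matching a small, loosely specified query graph against a larger labeled data graph, can be illustrated with off-the-shelf subgraph matching in NetworkX. The sketch below is only an analogy for the kind of query SLQ answers; SLQ additionally handles schemaless matching and result ranking, which plain isomorphism checking does not.

import networkx as nx
from networkx.algorithms import isomorphism

# Data graph: people, organizations, and a pathogen, with labeled nodes.
G = nx.Graph()
G.add_nodes_from([
    ("alice", {"label": "person"}),
    ("biolab", {"label": "organization"}),
    ("virus_x", {"label": "pathogen"}),
])
G.add_edges_from([("alice", "biolab"), ("biolab", "virus_x")])

# Query graph: "a person connected to an organization connected to a pathogen".
Q = nx.Graph()
Q.add_nodes_from([("p", {"label": "person"}),
                  ("o", {"label": "organization"}),
                  ("v", {"label": "pathogen"})])
Q.add_edges_from([("p", "o"), ("o", "v")])

matcher = isomorphism.GraphMatcher(
    G, Q, node_match=isomorphism.categorical_node_match("label", None))
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)   # e.g. {'alice': 'p', 'biolab': 'o', 'virus_x': 'v'}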
Quantin, Catherine; Jaquet-Chiffelle, David-Olivier; Coatrieux, Gouenou; Benzenine, Eric; Allaert, François-André
2011-02-01
The purpose of our multidisciplinary study was to define a pragmatic and secure alternative to the creation of a national centralised medical record which could gather together the different parts of the medical record of a patient scattered in the different hospitals where he was hospitalised without any risk of breaching confidentiality. We first analyse the reasons for the failure and the dangers of centralisation (i.e. difficulty to define a European patients' identifier, to reach a common standard for the contents of the medical record, for data protection) and then propose an alternative that uses the existing available data on the basis that setting up a safe though imperfect system could be better than continuing a quest for a mythical perfect information system that we have still not found after a search that has lasted two decades. We describe the functioning of Medical Record Search Engines (MRSEs), using pseudonymisation of patients' identity. The MRSE will be able to retrieve and to provide upon an MD's request all the available information concerning a patient who has been hospitalised in different hospitals without ever having access to the patient's identity. The drawback of this system is that the medical practitioner then has to read all of the information and to create his own synthesis and eventually to reject extra data. Faced with the difficulties and the risks of setting up a centralised medical record system, a system that gathers all of the available information concerning a patient could be of great interest. This low-cost pragmatic alternative which could be developed quickly should be taken into consideration by health authorities. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kluber, Laurel A; Yip, Daniel Z; Yang, Zamin K
This data set provides links to the results of metagenomic analyses of 44 peat samples collected on 13 June 2016 from SPRUCE experiment treatment and ambient plots. Experimental plots had received approximately 24 months of belowground warming (deep peat heating (DPH), Hanson et al. 2015) with the last 9 of those months including air warming for implementation of whole ecosystem warming (WEW – Hanson et al. 2016). WEW Metagenomes: Data from these metagenomes are archived in the U.S. Department of Energy Joint Genome Institute (DOE JGI) Integrated Microbial Genomes (IMG) system (http://img.jgi.doe.gov/) and are available at the accession numbers provided below (Table 2) and in the accompanying inventory file. The easiest way to find results on IMG is at this link, https://img.jgi.doe.gov/cgi-bin/m/main.cgi, and then enter “June2016WEW” as a search term in the “Quick Genome Search:” box at the top of the page.
MO/DSD online information server and global information repository access
NASA Technical Reports Server (NTRS)
Nguyen, Diem; Ghaffarian, Kam; Hogie, Keith; Mackey, William
1994-01-01
Often in the past, standards and new technology information have been available only in hardcopy form, with reproduction and mailing costs proving rather significant. In light of NASA's current budget constraints and in the interest of efficient communications, the Mission Operations and Data Systems Directorate (MO&DSD) New Technology and Data Standards Office recognizes the need for an online information server (OLIS). This server would allow: (1) dissemination of standards and new technology information throughout the Directorate more quickly and economically; (2) online browsing and retrieval of documents that have been published for and by MO&DSD; and (3) searching for current and past study activities on related topics within NASA before issuing a task. This paper explores a variety of available information servers and searching tools, their current capabilities and limitations, and the application of these tools to MO&DSD. Most importantly, the discussion focuses on the way this concept could be easily applied toward improving dissemination of standards and new technologies and improving documentation processes.
An Interview with AIDS Vaccine Researcher Chris Parks
ERIC Educational Resources Information Center
Sullivan, Megan
2010-01-01
The search for an AIDS (acquired immune deficiency syndrome) vaccine is truly a global effort, with university laboratories, biotech firms, pharmaceutical companies, nonprofit research organizations, hospitals, and clinics all working together to develop an effective vaccine as quickly as possible. The International AIDS Vaccine Initiative (IAVI)…
ERIC Educational Resources Information Center
National Institute of General Medical Sciences (NIGMS), 2009
2009-01-01
Computer advances now let researchers quickly search through DNA sequences to find gene variations that could lead to disease, simulate how flu might spread through one's school, and design three-dimensional animations of molecules that rival any video game. By teaming computers and biology, scientists can answer new and old questions that could…
Consequences of Common Topological Rearrangements for Partition Trees in Phylogenomic Inference.
Chernomor, Olga; Minh, Bui Quang; von Haeseler, Arndt
2015-12-01
In phylogenomic analysis the collection of trees with identical score (maximum likelihood or parsimony score) may hamper tree search algorithms. Such collections are coined phylogenetic terraces. For sparse supermatrices with a lot of missing data, the number of terraces and the number of trees on the terraces can be very large. If terraces are not taken into account, a lot of computation time might be unnecessarily spent to evaluate many trees that in fact have identical score. To save computation time during the tree search, it is worthwhile to quickly identify such cases. The score of a species tree is the sum of scores for all the so-called induced partition trees. Therefore, if the topological rearrangement applied to a species tree does not change the induced partition trees, the score of these partition trees is unchanged. Here, we provide the conditions under which the three most widely used topological rearrangements (nearest neighbor interchange, subtree pruning and regrafting, and tree bisection and reconnection) change the topologies of induced partition trees. During the tree search, these conditions allow us to quickly identify whether we can save computation time on the evaluation of newly encountered trees. We also introduce the concept of partial terraces and demonstrate that they occur more frequently than the original "full" terrace. Hence, partial terrace is the more important factor of timesaving compared to full terrace. Therefore, taking into account the above conditions and the partial terrace concept will help to speed up the tree search in phylogenomic inference.
Protection of electronic health records (EHRs) in cloud.
Alabdulatif, Abdulatif; Khalil, Ibrahim; Mai, Vu
2013-01-01
EHR technology has come into widespread use and has attracted attention in healthcare institutions as well as in research. Cloud services are used to build efficient EHR systems and obtain the greatest benefits of EHR implementation. Many issues relating to building an ideal EHR system in the cloud, especially the tradeoff between flexibility and security, have recently surfaced. The privacy of patient records in cloud platforms is still a point of contention. In this research, we are going to improve the management of access control by restricting participants' access through the use of distinct encrypted parameters for each participant in the cloud-based database. Also, we implement and improve an existing secure index search algorithm to enhance the efficiency of information control and flow through a cloud-based EHR system. At the final stage, we contribute to the design of reliable, flexible and secure access control, enabling quick access to EHR information.
Conjugating binary systems for spacecraft thermal control
NASA Technical Reports Server (NTRS)
Grodzka, Philomena G.; Dean, William G.; Sisk, Lori A.; Karu, Zain S.
1989-01-01
The materials search was directed to liquid pairs which can form hydrogen bonds of just the right strength, i.e., strong enough to give a high heat of mixing, but weak enough to enable phase change to occur. The cursory studies performed in the area of additive effects indicate that Conjugating Binary (CB) performance can probably be fine-tuned by this means. The Fluid Loop Test Systems (FLTS) tests of candidate CBs indicate that the systems Triethylamine (TEA)/water and propionaldehyde/water show close to the ideal, reversible behavior, at least initially. The Quick Screening Tests (QSTs) and FLTS tests, however, both suffer from rather severe static due either to inadequate stirring or temperature control. Thus it is not possible to adequately evaluate less than ideal CB performers. Less than ideal performers, it should be noted, may have features that make them better practical CBs than ideal performers. Improvement of the evaluation instrumentation is thus indicated.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract the target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae for Arimoto entropy dual-threshold selection were calculated by recursion to eliminate redundant computation effectively and to reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, markedly accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method can segment the target more quickly and accurately with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
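The key modification described, replacing the uniform random step in the bee colony's local search with a chaotic sequence generated by the tent map, is easy to sketch. The code below shows a tent-map generator and a hedged example of using it to perturb a pair of candidate thresholds; the actual update rule and the Arimoto-entropy objective of the paper are not reproduced here.

def tent_map(x, mu=1.99):
    # One iteration of the tent map on (0, 1).
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_sequence(n, seed=0.37):
    # Generate n chaotic values in (0, 1) by iterating the tent map.
    seq, x = [], seed
    for _ in range(n):
        x = tent_map(x)
        seq.append(x)
    return seq

def chaotic_neighbor(thresholds, step, chaos):
    # Perturb dual thresholds (t1 < t2) with chaotic rather than uniform noise.
    t1, t2 = thresholds
    t1 += step * (2.0 * chaos[0] - 1.0)   # map (0,1) chaos to (-step, +step)
    t2 += step * (2.0 * chaos[1] - 1.0)
    t1, t2 = sorted((max(0, min(255, t1)), max(0, min(255, t2))))
    return t1, t2

chaos = chaotic_sequence(2)
print(chaotic_neighbor((80.0, 170.0), step=10.0, chaos=chaos))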
search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles
Iwema, Carrie L.; LaDue, John; Zack, Angela; Chattopadhyay, Ansuman
2016-01-01
The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1) to make their research immediately and freely available and (2) to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers are gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as they are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint. PMID:27508060
World of intelligence defense object detection-machine learning (artificial intelligence)
NASA Astrophysics Data System (ADS)
Gupta, Anitya; Kumar, Akhilesh; Bhushan, Vinayak
2018-04-01
This paper proposes a Quick Region-based Convolutional Network method (Quick R-CNN) for object detection. Quick R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Quick R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Quick R-CNN trains the very deep VGG16 network 9 times faster than R-CNN, is 213 times faster at test time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Quick R-CNN trains VGG16 3 times faster, tests 10 times faster, and is more accurate. Quick R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License.
Temple, Meredith D; Kosik, Kenneth S; Steward, Oswald
2002-09-01
This study evaluated the cognitive mapping abilities of rats that spent part of their early development in a microgravity environment. Litters of male and female Sprague-Dawley rat pups were launched into space aboard the National Aeronautics and Space Administration space shuttle Columbia on postnatal day 8 or 14 and remained in space for 16 days. These animals were designated as FLT groups. Two age-matched control groups remained on Earth: those in standard vivarium housing (VIV) and those in housing identical to that aboard the shuttle (AGC). On return to Earth, animals were tested in three different tasks that measure spatial learning ability, the Morris water maze (MWM), and a modified version of the radial arm maze (RAM). Animals were also tested in an open field apparatus to measure general activity and exploratory activity. Performance and search strategies were evaluated in each of these tasks using an automated tracking system. Despite the dramatic differences in early experience, there were remarkably few differences between the FLT groups and their Earth-bound controls in these tasks. FLT animals learned the MWM and RAM as quickly as did controls. Evaluation of search patterns suggested subtle differences in patterns of exploration and in the strategies used to solve the tasks during the first few days of testing, but these differences normalized rapidly. Together, these data suggest that development in an environment without gravity has minimal long-term impact on spatial learning and memory abilities. Any differences due to development in microgravity are quickly reversed after return to earth normal gravity.
An active visual search interface for Medline.
Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan
2007-01-01
Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high throughput experimental data analysis and cross-discipline research. We try to develop a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz.
Design of a web portal for interdisciplinary image retrieval from multiple online image resources.
Kammerer, F J; Frankewitsch, T; Prokosch, H-U
2009-01-01
Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process. Existing websites do not meet all the requirements for an ideal learning environment for medical students. This work intends to establish a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts relevant metadata. The images are indexed using UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow to create individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula and the ICD-10 was created. The images may be accessed via the given navigation structure or using different search functions. Cross-references are provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good first results concerning the acceptance of the developed navigation structures and search features. The integration of the images from different sources into the UMLS semantic network offers a quick and an easy-to-use learning environment.
Adamusiak, Tomasz; Parkinson, Helen; Muilu, Juha; Roos, Erik; van der Velde, Kasper Joeri; Thorisson, Gudmundur A; Byrne, Myles; Pang, Chao; Gollapudi, Sirisha; Ferretti, Vincent; Hillege, Hans; Brookes, Anthony J; Swertz, Morris A
2012-05-01
Genetic and epidemiological research increasingly employs large collections of phenotypic and molecular observation data from high quality human and model organism samples. Standardization efforts have produced a few simple formats for exchange of these various data, but a lightweight and convenient data representation scheme for all data modalities does not exist, hindering successful data integration, such as assignment of mouse models to orphan diseases and phenotypic clustering for pathways. We report a unified system to integrate and compare observation data across experimental projects, disease databases, and clinical biobanks. The core object model (Observ-OM) comprises only four basic concepts to represent any kind of observation: Targets, Features, Protocols (and their Applications), and Values. An easy-to-use file format (Observ-TAB) employs Excel to represent individual and aggregate data in straightforward spreadsheets. The systems have been tested successfully on human biobank, genome-wide association studies, quantitative trait loci, model organism, and patient registry data using the MOLGENIS platform to quickly setup custom data portals. Our system will dramatically lower the barrier for future data sharing and facilitate integrated search across panels and species. All models, formats, documentation, and software are available for free and open source (LGPLv3) at http://www.observ-om.org. © 2012 Wiley Periodicals, Inc.
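The four core concepts of Observ-OM (Targets, Features, Protocols and their Applications, and Values) map naturally onto a small object model. The dataclass sketch below is an illustration of that structure as described in the abstract, not the MOLGENIS implementation or the Observ-TAB file layout; all class and field names are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Target:            # the thing being observed (individual, sample, panel, ...)
    name: str

@dataclass
class Feature:           # what is being measured (phenotype, genotype, ...)
    name: str
    unit: str = ""

@dataclass
class Protocol:          # how the observation is made
    name: str
    features: List[Feature] = field(default_factory=list)

@dataclass
class ObservedValue:     # a single Value linking a Target, Feature and Protocol application
    target: Target
    feature: Feature
    protocol: Protocol
    value: str

mouse = Target("mouse_42")
weight = Feature("body_weight", unit="g")
weighing = Protocol("weekly_weighing", features=[weight])
print(ObservedValue(mouse, weight, weighing, "23.5"))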
A Framework for Debugging Geoscience Projects in a High Performance Computing Environment
NASA Astrophysics Data System (ADS)
Baxter, C.; Matott, L.
2012-12-01
High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.
Google Scholar and the Continuing Education Literature
ERIC Educational Resources Information Center
Howland, Jared L.; Howell, Scott; Wright, Thomas C.; Dickson, Cody
2009-01-01
The recent introduction of Google Scholar has renewed hope that someday a powerful research tool will bring continuing education literature more quickly, freely, and completely to one's computer. The authors suggest that using Google Scholar with other traditional search methods will narrow the research gap between what is discoverable and…
Adult Nutrition Education Materials. January 1982-October 1988. Quick Bibliography Series.
ERIC Educational Resources Information Center
Irving, Holly Berry
This annotated bibliography of materials available from the National Agricultural Library through interlibrary loan to local libraries focuses on nutrition and dietetics as they relate to physical health and special health problems. The bibliography was derived from online searches of the AGRICOLA database, and materials include audiovisuals,…
ERIC Educational Resources Information Center
Caron, Daniel W.
2008-01-01
Mentorship is a valuable learning tool. A quick search of the Internet will result in hundreds of examples of mentorship between students, teachers, and people from industry. In this article, the author describes an e-mentor program used by aerospace students at Kingswoood Regional High School in Wolfeboro, New Hampshire. The author also describes…
Using the Internet To Strengthen Curriculum.
ERIC Educational Resources Information Center
Lewin, Larry
This book helps teachers learn how to bring the Internet's World Wide Web into their classrooms and encourage students to tap into this resource. Using the dozens of examples and strategies provided, teachers can help students: use search engines effectively; quickly find Web sites and understand their content; conduct sound research; think…
The Unrelenting Search for a Quick Fix
ERIC Educational Resources Information Center
Marinak, Barbara A.
2016-01-01
Cassidy, Ortlieb, and Grote-Garcia (2016) have penned an important and insightful article. This reflection on the impact of the Common Core State Standards (CCSS; National Governors Association [NGA] Center for Best Practices & Council of Chief State School Officers [CCSSO], 2010) through the lens of "What's Hot, What's Not" for two…
[Effect of object consistency in a spatial contextual cueing paradigm].
Takeda, Yuji
2008-04-01
Previous studies demonstrated that attention can be quickly guided to a target location in a visual search task when the spatial configurations of search items and/or the object identities were repeated in the previous trials. This phenomenon is termed contextual cueing. Recently, it was reported that spatial configuration learning and object identity learning occurred independently, when novel contours were used as search items. The present study examined whether this learning occurred independently even when the search items were meaningful. The results showed that the contextual cueing effect was observed even if the relationships between the spatial locations and object identities were jumbled (Experiment 1). However, it disappeared when the search items were changed into geometric patterns (Experiment 2). These results suggest that the spatial configuration can be learned independent of the object identities; however, the use of the learned configuration is restricted by the learning situations.
Urban Typologies: Towards an ORNL Urban Information System (UrbIS)
NASA Astrophysics Data System (ADS)
KC, B.; King, A. W.; Sorokine, A.; Crow, M. C.; Devarakonda, R.; Hilbert, N. L.; Karthik, R.; Patlolla, D.; Surendran Nair, S.
2016-12-01
Urban environments differ in a large number of key attributes; these include infrastructure, morphology, demography, and economic and social variables, among others. These attributes determine many urban properties such as energy and water consumption, greenhouse gas emissions, air quality, public health, sustainability, and vulnerability and resilience to climate change. Characterization of urban environments by a single property such as population size does not sufficiently capture this complexity. In addressing this multivariate complexity one typically faces such problems as disparate and scattered data, challenges of big data management, spatial searching, insufficient computational capacity for data-driven analysis and modeling, and the lack of tools to quickly visualize the data and compare the analytical results across different cities and regions. We have begun the development of an Urban Information System (UrbIS) to address these issues, one that embraces the multivariate "big data" of urban areas and their environments across the United States utilizing the Big Data as a Service (BDaaS) concept. With technological roots in High-performance Computing (HPC), BDaaS is based on the idea of outsourcing computations to different computing paradigms, scalable to super-computers. UrbIS aims to incorporate federated metadata search, integrated modeling and analysis, and geovisualization into a single seamless workflow. The system includes web-based 2D/3D visualization with an iGlobe interface, fast cloud-based and server-side data processing and analysis, and a metadata search engine based on the Mercury data search system developed at Oak Ridge National Laboratory (ORNL). Results of analyses will be made available through web services. We are implementing UrbIS in ORNL's Compute and Data Environment for Science (CADES) and are leveraging ORNL experience in complex data and geospatial projects. The development of UrbIS is being guided by an investigation of urban heat islands (UHI) that uses high-dimensional clustering and statistics to define urban typologies (types of cities) and to examine how UHI vary with urban type across the United States.
Hybrid Co-Evolutionary Motion Planning via Visibility-Based Repair
NASA Technical Reports Server (NTRS)
Dozier, Gerry; McCullough, Shaun; Brown, Edward, Jr.; Homaifar, Abdollah; Bikdash, Marwan
1997-01-01
This paper introduces a hybrid co-evolutionary system for global motion planning within unstructured environments. This system combines the concept of co-evolutionary search with a concept that we refer to as visibility-based repair to form a hybrid that quickly transforms infeasible motions into feasible ones. Also, this system makes use of a novel representation scheme for the obstacles within an environment. Our hybrid evolutionary system differs from other evolutionary motion planners in that (1) more emphasis is placed on repairing infeasible motions to develop feasible motions rather than using simulated evolution exclusively as a means of discovering feasible motions, (2) a continuous map of the environment is used rather than a discretized map, and (3) it develops global motion plans for multiple mobile destinations by co-evolving populations of sub-global motion plans. In this paper, we demonstrate the effectiveness of this system by using it to solve two challenging motion planning problems where multiple targets try to move away from a point robot.
Evans, Jonathan P; Smith, Chris D; Fine, Nicola F; Porter, Ian; Gangannagaripalli, Jaheeda; Goodwin, Victoria A; Valderas, Jose M
2018-04-01
Clinical rating systems are used as outcome measures in clinical trials and attempt to gauge the patient's view of his or her own health. The choice of clinical rating system should be supported by its performance against established quality standards. A search strategy was developed to identify all studies that reported the use of clinical rating systems in the elbow literature. The strategy was run from inception in Medline, Embase, and CINAHL. Data extraction identified the date of publication, country of data collection, pathology assessed, and the outcome measure used. We identified 980 studies that reported clinical rating system use. Seventy-two separate rating systems were identified. Forty-one percent of studies used ≥2 separate measures. Overall, 54% of studies used the Mayo Elbow Performance Score (MEPS). For arthroplasty, 82% used MEPS, 17% used Disabilities of Arm, Shoulder and Hand (DASH), and 7% used QuickDASH. For trauma, 66.7% used MEPS, 32% used DASH, and 23% used the Morrey Score. For tendinopathy, 31% used DASH, 23% used Patient-Rated Tennis Elbow Evaluation (PRTEE), and 13% used MEPS. Over time, there was an increased proportional use of the MEPS, DASH, QuickDASH, PRTEE, and the Oxford Elbow Score. This study identified a wide choice and usage of clinical rating systems in the elbow literature. Numerous studies reported measures without a history of either a specific pathology or cross-cultural validation. Interpretability and comparison of outcomes depend on a unified choice of outcome measure, which the current literature does not demonstrate. Copyright © 2018 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth
2015-03-16
Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. © 2015 ARVO.
GGRNA: an ultrafast, transcript-oriented search engine for genes and transcripts
Naito, Yuki; Bono, Hidemasa
2012-01-01
GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, annotations of gene and even nucleotide/amino acid sequences through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches by various types of keywords. GGRNA web server will provide a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users. PMID:22641850
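As an illustration of the kind of lookup GGRNA describes (short queries, degenerate IUPAC codes, up to three mismatches), here is a small, hedged sketch; it is not GGRNA's actual engine, which is engineered for speed across the whole RefSeq set.

```python
# Toy matcher for a short nucleotide query that may contain IUPAC degenerate codes
# (N, R, Y, S, ...), allowing up to a fixed number of mismatches.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def find_matches(sequence, query, max_mismatches=3):
    """Return start positions where query matches sequence with <= max_mismatches."""
    hits = []
    s, q = sequence.upper(), query.upper()
    for start in range(len(s) - len(q) + 1):
        mismatches = 0
        for offset, code in enumerate(q):
            if s[start + offset] not in IUPAC.get(code, ""):
                mismatches += 1
                if mismatches > max_mismatches:
                    break
        else:
            hits.append(start)
    return hits

print(find_matches("ACGTACGTTAGC", "ACRT", max_mismatches=1))  # degenerate R = A or G
```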
Implementation of a Big Data Accessing and Processing Platform for Medical Records in Cloud.
Yang, Chao-Tung; Liu, Jung-Chun; Chen, Shuo-Tsung; Lu, Hsin-Wen
2017-08-18
Big Data analysis has become a key factor in being innovative and competitive. Along with worldwide population growth and the aging trend in developed countries, the rate of national medical care usage has been increasing. Because individual medical data are usually scattered across different institutions and their data formats vary, integrating these continually growing data is challenging. For these data platforms to have scalable load capacity, they must be built on a good platform architecture. Several issues must be considered in order to use cloud computing to quickly integrate big medical data into a database for easy analysis, searching, and filtering to obtain valuable information. This work builds a cloud storage system with HBase of Hadoop for storing and analyzing big data of medical records and improves the performance of importing data into the database. The medical records are stored in the HBase database platform for big data analysis. The system performs distributed processing of medical records through Hadoop MapReduce programming and provides functions including keyword search, data filtering, and basic statistics for the HBase database. The system uses Put with a single-threaded method and the CompleteBulkload mechanism to import medical data. From the experimental results, we find that when the file size is less than 300 MB, the Put with single-threaded method is used, and when the file size is larger than 300 MB, the CompleteBulkload mechanism is used to improve the performance of data import into the database. The system provides a web interface that allows users to search data, filter out meaningful information through the web, and analyze and convert data into suitable forms that will be helpful for medical staff and institutions.
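A hypothetical sketch of the size-based dispatch described in the abstract is shown below; the two import helpers are placeholders rather than real HBase client calls, and only the 300 MB threshold logic is illustrated.

```python
import os

# Dispatch an import job by file size: small files via single-threaded Put,
# large files via a CompleteBulkload-style path (both helpers are placeholders).
THRESHOLD_BYTES = 300 * 1024 * 1024

def import_with_put(path):
    print(f"single-threaded Put import: {path}")        # placeholder, not an HBase call

def import_with_bulkload(path):
    print(f"CompleteBulkload-style import: {path}")     # placeholder, not an HBase call

def import_medical_records(path):
    if os.path.getsize(path) < THRESHOLD_BYTES:
        import_with_put(path)
    else:
        import_with_bulkload(path)
```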
Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J
2015-04-01
To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.
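For readers unfamiliar with the two effectiveness measures, a tiny worked example follows; the counts are invented to land near the averages reported above and are not taken from the study.

```python
# Worked illustration of precision and sensitivity (counts are hypothetical).
def precision(relevant_retrieved, total_retrieved):
    return relevant_retrieved / total_retrieved

def sensitivity(relevant_retrieved, total_relevant):
    return relevant_retrieved / total_relevant

# e.g. a keyword search returning 20 articles, 18 of which truly use the CPS,
# out of 110 CPS studies that exist in total:
print(precision(18, 20))      # 0.90 -> high precision
print(sensitivity(18, 110))   # ~0.16 -> low sensitivity
```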
An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm
NASA Astrophysics Data System (ADS)
Wu, Chenhan; Meng, Lingkui; Deng, Shijun
2005-07-01
The key requirement of 3-D GIS is quick, high-quality 3-D visualization, in which landform-based 3-D roaming systems play an important role. However, increasing the efficiency of the 3-D roaming engine while processing large amounts of landform data is a central problem in 3-D landform roaming systems, and handling it poorly results in tremendous consumption of system resources. The key design issue for a 3-D roaming system is therefore how to achieve high-speed processing of distributed landform DEM (Digital Elevation Model) data and high-speed distributed modulation of the various 3-D landform data resources. In this paper we improve the basic ant algorithm and design a modulation strategy for 3-D GIS landform resources based on the improved algorithm. By introducing initial hypothetical road weights σi, the information-factor update of the original algorithm changes from Δτj to Δτj + σi, where the weights are determined by the 3-D computational capacity of the various nodes in the network environment. During the initial phase of task assignment, increasing the information factors of resources with high task-completion rates and decreasing those of resources with low rates makes the load-completion rates approach a common value as quickly as possible; in the later phase of task assignment, the load-balancing ability of the system is further improved. Experimental results show that the improved ant algorithm not only removes many disadvantages of the traditional ant algorithm but also, like ants foraging for food, effectively distributes the complex landform computation across many computers for cooperative processing and achieves satisfying search results.
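A toy sketch of the modified pheromone update (Δτj + σi, with σi derived from node capacity) is given below; it illustrates the idea only and is not the authors' implementation.

```python
import random

# Ant-style task assignment: each compute node j keeps an information factor tau[j],
# and the update adds a capacity-derived weight sigma[j],
# i.e. tau_j <- (1 - rho)*tau_j + dtau_j + sigma_j, so nodes with more 3-D
# computing capacity attract landform tasks sooner (all details hypothetical).
def ant_assign(n_tasks, capacity, rho=0.1, rounds=20, seed=1):
    rng = random.Random(seed)
    n = len(capacity)
    sigma = [c / sum(capacity) for c in capacity]       # hypothetical road weights sigma_i
    tau = [1.0] * n                                     # information factors
    loads = [0] * n
    for _ in range(rounds):
        loads = [0] * n
        for _ in range(n_tasks):
            j = rng.choices(range(n), weights=tau)[0]   # pick a node proportional to tau
            loads[j] += 1
        rate = [capacity[j] / max(loads[j], 1) for j in range(n)]   # task-completion rate
        top = max(rate)
        tau = [(1 - rho) * tau[j] + rate[j] / top + sigma[j] for j in range(n)]
    return loads, tau

print(ant_assign(n_tasks=100, capacity=[4.0, 2.0, 1.0]))
```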
Development and evaluation of the quick anaero-system-a new disposable anaerobic culture system.
Yang, Nam Woong; Kim, Jin Man; Choi, Gwang Ju; Jang, Sook Jin
2010-04-01
We developed a new disposable anaerobic culture system, namely, the Quick anaero-system, for easy culturing of obligate anaerobes. Our system consists of 3 components: 1) new disposable anaerobic gas pack, 2) disposable culture-envelope and sealer, and 3) reusable stainless plate rack with mesh containing 10 g of palladium catalyst pellets. To evaluate the efficiency of our system, we used 12 anaerobic bacteria. We prepared 2 sets of ten-fold serial dilutions of the 12 anaerobes, and inoculated these samples on Luria-Bertani (LB) broth and LB blood agar plate (LB-BAP) (BD Diagnostic Systems, USA). Each set was incubated in the Quick anaero-system (DAS Tech, Korea) and BBL GasPak jar with BD GasPak EZ Anaerobe Container System (BD Diagnostic Systems) at 35-37 degrees C for 48 hr. The minimal inoculum size showing visible growth of 12 anaerobes when incubated in both the systems was compared. The minimal inoculum size showing visible growth for 2 out of the 12 anaerobes in the LB broth and 9 out of the 12 anaerobes on LB-BAP was lower for the Quick anaero-system than in the BD GasPak EZ Anaerobe Container System. The mean time (+/-SD) required to achieve absolute anaerobic conditions of the Quick anaero-system was 17 min and 56 sec (+/-3 min and 25 sec). The Quick anaero-system is a simple and effective method of culturing obligate anaerobes, and its performance is superior to that of the BD GasPak EZ Anaerobe Container System.
Fast, Inclusive Searches for Geographic Names Using Digraphs
Donato, David I.
2008-01-01
An algorithm specifies how to quickly identify names that approximately match any specified name when searching a list or database of geographic names. Based on comparisons of the digraphs (ordered letter pairs) contained in geographic names, this algorithmic technique identifies approximately matching names by applying an artificial but useful measure of name similarity. A digraph index enables computer name searches that are carried out using this technique to be fast enough for deployment in a Web application. This technique, which is a member of the class of n-gram algorithms, is related to, but distinct from, the soundex, PHONIX, and metaphone phonetic algorithms. Despite this technique's tendency to return some counterintuitive approximate matches, it is an effective aid for fast, inclusive searches for geographic names when the exact name sought, or its correct spelling, is unknown.
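A short sketch of digraph-based matching in this spirit follows; since the exact similarity measure is not spelled out here, a Dice coefficient over each name's digraph set stands in for it.

```python
# Rank gazetteer names by digraph (ordered letter pair) similarity to a query.
def digraphs(name):
    s = "".join(ch for ch in name.upper() if ch.isalpha())
    return {s[i:i + 2] for i in range(len(s) - 1)}

def similarity(a, b):
    da, db = digraphs(a), digraphs(b)
    if not da or not db:
        return 0.0
    return 2 * len(da & db) / (len(da) + len(db))   # Dice coefficient (stand-in measure)

gazetteer = ["Lake Pleasant", "Pleasant Lake", "Lake Placid", "Mount Pleasant"]
query = "Lake Plesant"                              # misspelled query still matches well
ranked = sorted(gazetteer, key=lambda n: similarity(query, n), reverse=True)
print(ranked[:2])
```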
NASA Astrophysics Data System (ADS)
Lauer, Tod R.; Throop, Henry B.; Showalter, Mark R.; Weaver, Harold A.; Stern, S. Alan; Spencer, John R.; Buie, Marc W.; Hamilton, Douglas P.; Porter, Simon B.; Verbiscer, Anne J.; Young, Leslie A.; Olkin, Cathy B.; Ennico, Kimberly; New Horizons Science Team
2018-02-01
We conducted an extensive search for dust or debris rings in the Pluto-Charon system before, during, and after the New Horizons encounter in July 2015. Methodologies included attempting to detect features by back-scattered light during the approach to Pluto (phase angle α ∼ 15°), in situ detection of impacting particles, a search for stellar occultations near the time of closest approach, and by forward-scattered light imaging during departure (α ∼ 165°). An extensive search using the Hubble Space Telescope (HST) prior to the encounter also contributed to the final ring limits. No rings, debris, or dust features were observed, but our new detection limits provide a substantially improved picture of the environment throughout the Pluto-Charon system. Searches for rings in back-scattered light covered the range 35,000-250,000 km from the system barycenter, a zone that starts interior to the orbit of Styx, the innermost minor satellite, and extends out to four times the orbital radius of Hydra, the outermost known satellite. We obtained our firmest limits using data from the New Horizons LORRI camera in the inner half of this region. Our limits on the normal I/F of an unseen ring depend on the radial scale of the rings: 2 × 10⁻⁸ (3σ) for 1500 km wide rings, 1 × 10⁻⁸ for 6000 km rings, and 7 × 10⁻⁹ for 12,000 km rings. Beyond ∼100,000 km from Pluto, HST observations limit normal I/F to ∼8 × 10⁻⁸. Searches for dust features from forward-scattered light extended from the surface of Pluto to the Pluto-Charon Hill sphere (r_Hill = 6.4 × 10⁶ km). No evidence for rings or dust clouds was detected to normal I/F limits of ∼8.9 × 10⁻⁷ on ∼10⁴ km scales. Four stellar occultation observations also probed the space interior to Hydra, but again no dust or debris was detected. The Student Dust Counter detected one particle impact 3.6 × 10⁶ km from Pluto, but this is consistent with the interplanetary space environment established during the cruise of New Horizons. Elsewhere in the solar system, small moons commonly share their orbits with faint dust rings. Our results support recent dynamical studies suggesting that small grains are quickly lost from the Pluto-Charon system due to solar radiation pressure, whereas larger particles are orbitally unstable due to ongoing perturbations by the known moons.
Identification and expression of the protein ubiquitination system in Giardia intestinalis.
Gallego, Eva; Alvarado, Magda; Wasserman, Moises
2007-06-01
Giardia intestinalis is a single-cell eukaryotic microorganism, regarded as one of the earliest divergent eukaryotes and thus an attractive model to study the evolution of regulatory systems. Giardia has two different forms throughout its life cycle, cyst and trophozoite, and changes from one to the other in response to environmental signals. The two differentiation processes involve differential gene expression as well as quick and specific protein turnover that may be mediated by the ubiquitin/proteasome system. The aim of this work was to search for unreported components of the ubiquitination system and to experimentally demonstrate their expression in the parasite and during the two differentiation processes. We found activity of protein ubiquitination in G. intestinalis trophozoites and analyzed the transcription of the ubiquitin gene, as well as that of the activating (E1), conjugating (E2), and ligase (E3) ubiquitin enzymes during encystation and excystation. A constant ubiquitin expression persisted during the parasite's differentiation processes, whereas variation in transcription was observed in the other genes under study.
ERIC Educational Resources Information Center
Webber, Nancy
2004-01-01
Many art teachers use the Web as an information source. Overall, they look for good content that is clearly written concise, accurate, and pertinent. A well-designed site gives users what they want quickly, efficiently, and logically, and does not ask them to assemble a puzzle to resolve their search. How can websites with these qualities be…
Role of online journals and peer-reviewed research
Robert L. Deal
2014-01-01
The recent explosion of online journals has led some researchers, scientists and academics to reconsider their traditional venues for publishing research. These online journals have the potential for quickly disseminating research, but they also present lots of uncertainty, confusion, and pitfalls for researchers. Many academics search out journals with the high...
Population Migration in Rural Areas, January 1979-December 1988. Quick Bibliography Series.
ERIC Educational Resources Information Center
La Caille John, Patricia, Comp.
This bibliography consists of 87 entries of materials related to population trends in rural and nonmetropolitan areas. This collection is the result of a computerized search of the AGRICOLA database. The bibliography covers topics of rural population change, migration and migrants, farm labor supplies and social conditions, and different patterns…
Using Internet Based Paraphrasing Tools: Original Work, Patchwriting or Facilitated Plagiarism?
ERIC Educational Resources Information Center
Rogerson, Ann M.; McCarthy, Grace
2017-01-01
A casual comment by a student alerted the authors to the existence and prevalence of Internet-based paraphrasing tools. A subsequent quick Google search highlighted the broad range and availability of online paraphrasing tools which offer free 'services' to paraphrase large sections of text ranging from sentences, paragraphs, whole articles, book…
ERIC Educational Resources Information Center
Thompson, Tommy
2002-01-01
Notes that despite having access to vast nutritional knowledge, Americans today are more malnourished and obese than ever before. Concludes that eating normal, basic, ordinary foods in variety can supply all nutritional needs; gimmicks are not needed, and the search for the "quick-fix" must stop--it is not on any shelf. Includes the United States…
Avoid the Void: Quick and Easy Site Submission Strategies.
ERIC Educational Resources Information Center
Sullivan, Danny
2000-01-01
Explains how to submit Web sites and promote them to make them more findable by search engines. Discusses submitting to Yahoo!; the Open Directory and other human-powered directories; proper tagging with HTML; designing pages to improve the number indexed; and submitting additional pages as well as the home page. (LRW)
Selecting Digital Children's Books: An Interview Study
ERIC Educational Resources Information Center
Schlebbe, Kirsten
2018-01-01
Introduction: The market for digital children's books is growing steadily. But the options vary in terms of quality and parents looking for suitable apps for young children can get overwhelmed quickly. This study seeks to answer how families search for suitable applications and what aspects are important to them while selecting. Method:…
Searching for the Golden Fleece: The Epic Struggle Continues.
ERIC Educational Resources Information Center
Achilles, C. M.
The task of improving educational administrator preparation is one of epic proportions. The magnitude of the task is expressed using allusions to the epic style and the myth of Jason and the Golden Fleece. The paper presents a quick "environmental scan" to determine how visionary and revolutionary are some current ideas for improvements…
Consequences of Common Topological Rearrangements for Partition Trees in Phylogenomic Inference
Minh, Bui Quang; von Haeseler, Arndt
2015-01-01
In phylogenomic analysis the collection of trees with identical score (maximum likelihood or parsimony score) may hamper tree search algorithms. Such collections are coined phylogenetic terraces. For sparse supermatrices with a lot of missing data, the number of terraces and the number of trees on the terraces can be very large. If terraces are not taken into account, a lot of computation time might be unnecessarily spent to evaluate many trees that in fact have identical score. To save computation time during the tree search, it is worthwhile to quickly identify such cases. The score of a species tree is the sum of scores for all the so-called induced partition trees. Therefore, if the topological rearrangement applied to a species tree does not change the induced partition trees, the score of these partition trees is unchanged. Here, we provide the conditions under which the three most widely used topological rearrangements (nearest neighbor interchange, subtree pruning and regrafting, and tree bisection and reconnection) change the topologies of induced partition trees. During the tree search, these conditions allow us to quickly identify whether we can save computation time on the evaluation of newly encountered trees. We also introduce the concept of partial terraces and demonstrate that they occur more frequently than the original “full” terraces. Hence, partial terraces are a more important factor for saving computation time than full terraces. Therefore, taking into account the above conditions and the partial terrace concept will help to speed up the tree search in phylogenomic inference. PMID:26448206
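A toy illustration of "induced partition trees" follows, using rooted trees written as nested tuples; real phylogenomic software works with unrooted trees and dedicated tree objects, so this only sketches why an unaffected induced tree lets the search skip re-scoring.

```python
# Restrict a nested-tuple tree to the taxa sampled in one partition of the
# supermatrix, then check whether a rearrangement changed that induced tree.
def induce(tree, taxa):
    """Return the subtree induced by the given taxa (None if no taxon is present)."""
    if isinstance(tree, str):                       # leaf
        return tree if tree in taxa else None
    kept = [sub for sub in (induce(child, taxa) for child in tree) if sub is not None]
    if not kept:
        return None
    return kept[0] if len(kept) == 1 else tuple(sorted(kept, key=str))

species_tree    = ((("A", "B"), "C"), ("D", "E"))
rearranged_tree = ((("A", "C"), "B"), ("D", "E"))   # an NNI around the A/B/C clade
partition_taxa  = {"A", "D", "E"}                   # a gene sampled only for A, D, E

# The rearrangement does not affect this partition's induced tree, so its score
# is unchanged and need not be re-evaluated.
print(induce(species_tree, partition_taxa) == induce(rearranged_tree, partition_taxa))  # True
```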
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-04-01
How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added, online services. Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was more frequently adopted than manual customization by users. Those who did customize customized My E-Resources and Stay Current more often than Quick Search and File Cabinet. Most of those who customized did so only once. Users did not always take advantage of the services designed to aid their library research experiences. When personalization is available at registration, users readily accepted it. Customization tools were used less frequently; however, more research is needed to determine why this was the case.
Research Trend Visualization by MeSH Terms from PubMed.
Yang, Heyoung; Lee, Hyuck Jai
2018-05-30
Motivation: PubMed is a primary source of biomedical information, comprising a search tool and the biomedical literature from MEDLINE (the US National Library of Medicine's premier bibliographic database), life science journals, and online books. Complementary tools to PubMed have been developed to help users search for literature and acquire knowledge. However, these tools are insufficient to overcome the difficulties users face given the proliferation of biomedical literature. A new method is needed for searching the knowledge in the biomedical field. Methods: A new method is proposed in this study for visualizing recent research trends based on the documents retrieved for a search query given by the user. The Medical Subject Headings (MeSH) are used as the primary analytical element. MeSH terms are extracted from the literature and the correlations between them are calculated. A MeSH network, called MeSH Net, is generated as the final result based on the Pathfinder Network algorithm. Results: A case study to verify the proposed method was carried out on a research area defined by the search query (immunotherapy and cancer and "tumor microenvironment"). The MeSH Net generated by the method is in good agreement with the actual research activities in the research area (immunotherapy). Conclusion: A prototype application generating MeSH Net was developed. The application, which could be used as a "guide map for travelers", allows users to quickly and easily acquire knowledge of research trends. The combination of PubMed and MeSH Net is expected to be an effective complementary system for researchers in the biomedical field who experience difficulties with search and information analysis.
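A simplified sketch of the first steps (MeSH term co-occurrence counting) is shown below; it does not reproduce the Pathfinder Network pruning the method actually uses, and the term lists are invented.

```python
from collections import Counter
from itertools import combinations

# Count how often pairs of MeSH terms co-occur across retrieved articles and keep
# only the strongest links as edges of a term network (hypothetical article data).
articles_mesh = [
    {"Immunotherapy", "Neoplasms", "Tumor Microenvironment"},
    {"Immunotherapy", "Neoplasms", "T-Lymphocytes"},
    {"Tumor Microenvironment", "Neoplasms", "Macrophages"},
]

pair_counts = Counter()
for terms in articles_mesh:
    for a, b in combinations(sorted(terms), 2):
        pair_counts[(a, b)] += 1

# keep pairs that co-occur in at least two articles as network edges
edges = [(pair, n) for pair, n in pair_counts.items() if n >= 2]
print(edges)
```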
Friederichs, Hendrik; Marschall, Bernhard; Weissenstein, Anne
2014-12-05
Practicing evidence-based medicine is an important aspect of providing good medical care. Accessing external information through literature searches on computer-based systems can effectively achieve integration in clinical care. We conducted a pilot study using smartphones, tablets, and stationary computers as search devices at the bedside. The objective was to determine possible differences between the various devices and assess students' internet use habits. In a randomized controlled pilot study, 120 students were divided in three groups. One control group solved clinical problems on a computer and two intervention groups used mobile devices at the bedside. In a questionnaire, students were asked to report their internet use habits as well as their satisfaction with their respective search tool using a 5-point Likert scale. Of 120 surveys, 94 (78.3%) complete data sets were analyzed. The mobility of the tablet (3.90) and the smartphone (4.39) was seen as a significant advantage over the computer (2.38, p < .001). However, for performing an effective literature search at the bedside, the computer (3.22) was rated superior to both tablet computers (2.13) and smartphones (1.68). No significant differences were detected between tablets and smartphones except satisfaction with screen size (tablet 4.10, smartphone 2.00, p < .001). Using a mobile device at the bedside to perform an extensive search is not suitable for students who prefer using computers. However, mobility is regarded as a substantial advantage, and therefore future applications might facilitate quick and simple searches at the bedside.
Studying quick coupler efficiency in working attachment system of single-bucket power shovel
NASA Astrophysics Data System (ADS)
Duganova, E. V.; Zagorodniy, N. A.; Solodovnikov, D. N.; Korneyev, A. S.
2018-03-01
A prototype of a quick-disconnect connector (quick coupler) with an unloaded retention mechanism was developed from an analysis of typical quick couplers used as intermediate elements in power shovels from different manufacturers. A method is presented that allows building a simulation model of the quick coupler prototype as an alternative to physical modeling for further studies.
Finding and applying evidence during clinical rounds: the "evidence cart".
Sackett, D L; Straus, S E
1998-10-21
Physicians need easy access to evidence for clinical decisions while they care for patients but, to our knowledge, no investigators have assessed use of evidence during rounds with house staff. To determine if it was feasible to find and apply evidence during clinical rounds, using an "evidence cart" that contains multiple sources of evidence and the means for projecting and printing them. Descriptive feasibility study of use of evidence during 1 month (April 1997) and anonymous questionnaire (May 1997). General medicine inpatient service. Medical students, house staff, fellows, and attending consultant. Evidence cart that included 2 secondary sources developed by the department (critically appraised topics [CATs] and Redbook), Best Evidence, JAMA Rational Clinical Examination series, the Cochrane Library, MEDLINE, a physical examination textbook, a radiology anatomy textbook, and a Simulscope, which allows several people to listen simultaneously to the same signs on physical examination. Number of times sources were used, type of sources searched and success of searches, time needed to search, and whether the search affected patient care. The evidence cart was used 98 times, but could not be taken on bedside rounds because of its bulk; hard copies of several sources were taken instead. When the evidence cart was used during team rounds and student rounds, some sources could be accessed quickly enough (10.2-25.4 seconds) to be practical on our service. Of 98 searches, 79 (81%) sought evidence that could affect diagnostic and/or treatment decisions. Seventy-one (90%) of 79 searches regarding patient management were successful, and when assessed from the perspective of the most junior team members responsible for each patient's evaluation and management, 37 (52%) of the 71 successful searches confirmed their current or tentative diagnostic or treatment plans, 18 (25%) led to a new diagnostic skill, an additional test, or a new management decision, and 16 (23%) corrected a previous clinical skill, diagnostic test, or treatment. When the cart was removed, the perceived need for evidence rose sharply, but a search for it was carried out only 12% of the time (5 searches performed out of the 41 times evidence was needed). Making evidence quickly available to clinicians on a busy medical inpatient service using an evidence cart increased the extent to which evidence was sought and incorporated into patient care decisions.
ListingAnalyst: A program for analyzing the main output file from MODFLOW
Winston, Richard B.; Paulinski, Scott
2014-01-01
ListingAnalyst is a Windows® program for viewing the main output file from MODFLOW-2005, MODFLOW-NWT, or MODFLOW-LGR. It organizes and displays large files quickly without using excessive memory. The sections and subsections of the file are displayed in a tree-view control, which allows the user to navigate quickly to desired locations in the files. ListingAnalyst gathers error and warning messages scattered throughout the main output file and displays them all together in an error and a warning tab. A grid view displays tables in a readable format and allows the user to copy the table into a spreadsheet. The user can also search the file for terms of interest.
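A minimal stand-in for the error- and warning-gathering step is sketched below; the real program also parses the listing file into sections and tables, which is not shown.

```python
# Scan a MODFLOW listing file and collect lines flagged as errors or warnings.
def collect_messages(listing_path):
    errors, warnings = [], []
    with open(listing_path, "r", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            upper = line.upper()
            if "ERROR" in upper:
                errors.append((lineno, line.rstrip()))
            elif "WARNING" in upper:
                warnings.append((lineno, line.rstrip()))
    return errors, warnings

# errors, warnings = collect_messages("model.lst")   # path is illustrative
```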
NASA Astrophysics Data System (ADS)
Ervin, Katherine; Shipman, Steven
2017-06-01
While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
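For readers unfamiliar with PSO, a bare-bones implementation on a standard test function (the sphere function) follows; the spectroscopy-specific fitness function and the AUTOFIT integration are not reproduced.

```python
import random

def sphere(x):
    """Standard test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(fitness, dim=3, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g] # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

print(pso(sphere))
```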
Self-Tuning of Design Variables for Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Lin, Chaung; Juang, Jer-Nan
2000-01-01
Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
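As one hedged illustration of the third technique, a generic simulated-annealing search over a model order and control weighting is sketched below, with a made-up objective standing in for the controller's performance index.

```python
import math
import random

def objective(order, weight):
    # hypothetical performance index: penalizes deviation from a "good" design
    return (order - 4) ** 2 + (math.log10(weight) - (-2)) ** 2

def anneal(iters=500, t0=5.0, seed=0):
    rng = random.Random(seed)
    order, weight = 2, 1.0
    cost = objective(order, weight)
    best = (order, weight, cost)
    for k in range(iters):
        temp = t0 * (1 - k / iters) + 1e-6                      # cooling schedule
        cand_order = max(1, order + rng.choice([-1, 0, 1]))     # perturb the order
        cand_weight = max(1e-6, weight * 10 ** rng.uniform(-0.5, 0.5))
        cand_cost = objective(cand_order, cand_weight)
        # accept improvements always, worse moves with a temperature-dependent probability
        if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / temp):
            order, weight, cost = cand_order, cand_weight, cand_cost
            if cost < best[2]:
                best = (order, weight, cost)
    return best

print(anneal())
```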
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
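A schematic sketch of the stepwise idea follows, with placeholder objectives and a plain random search standing in for the Shuffled Complex Evolution algorithm and the PRMS model.

```python
import random

# Calibrate one parameter group at a time (SR, then PET, then water balance, then
# daily runoff), holding previously calibrated groups fixed. Everything here is a
# placeholder illustration, not the actual model or optimizer.
def random_search(objective, bounds, fixed, n_trials=200, seed=0):
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        trial = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        cost = objective({**fixed, **trial})
        if cost < best_cost:
            best_params, best_cost = trial, cost
    return best_params

steps = [
    ("solar radiation", {"sr_coef": (0.1, 1.0)},   lambda p: abs(p["sr_coef"] - 0.6)),
    ("PET",             {"pet_coef": (0.5, 2.0)},  lambda p: abs(p["pet_coef"] - 1.2)),
    ("water balance",   {"soil_max": (1.0, 10.0)}, lambda p: abs(p["soil_max"] - 4.0)),
    ("daily runoff",    {"routing_k": (0.01, 0.5)},lambda p: abs(p["routing_k"] - 0.2)),
]

calibrated = {}
for name, bounds, objective in steps:
    calibrated.update(random_search(objective, bounds, fixed=calibrated))
    print(name, "->", calibrated)
```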
Introduction to Data Acquisition 3.Let’s Acquire Data!
NASA Astrophysics Data System (ADS)
Nakanishi, Hideya; Okumura, Haruhiko
In fusion experiments, diagnostic control and logging devices are usually connected through a field bus, e.g. GP-IB. Internet technologies are often applied for their remote operation. All equipment and digitizers are driven by pre-programmed sequences, in which clocks and triggers give the essential timing for data acquisition. The data production rate and volume must be checked against the transfer and storage rates. To store binary raw data safely, journaling file systems are preferably used with redundant disks (RAID) or a mirroring mechanism, such as “rsync”. A proper choice of data compression method not only reduces the storage size but also improves I/O throughput. A DBMS is even applicable for quick search of, or security around, the table data.
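A toy feasibility check for the rate comparison mentioned above might look like the following; all numbers are illustrative.

```python
# Check that the sustained data production rate does not exceed the transfer and
# storage rates, and that one shot's raw data fits the available disk space.
def acquisition_feasible(produce_mb_s, transfer_mb_s, store_mb_s,
                         shot_seconds, free_disk_gb):
    shot_size_gb = produce_mb_s * shot_seconds / 1024
    rate_ok = produce_mb_s <= min(transfer_mb_s, store_mb_s)
    space_ok = shot_size_gb <= free_disk_gb
    return rate_ok and space_ok, shot_size_gb

print(acquisition_feasible(produce_mb_s=80, transfer_mb_s=100,
                           store_mb_s=120, shot_seconds=600, free_disk_gb=100))
```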
Improving Spacecraft Data Visualization Using Splunk
NASA Technical Reports Server (NTRS)
Conte, Matthew
2012-01-01
EPOXI, like all spacecraft missions, receives large volumes of telemetry data from its spacecraft, DIF. It is extremely important for this data to be updated quickly and presented in a readable manner so that the flight team can monitor the status of the spacecraft. Existing DMD pages for monitoring spacecraft telemetry, while functional, are limited and do not take advantage of modern search technology. For instance, they only display current data points from instruments on the spacecraft and have limited graphing capabilities, making it difficult to see historical data. The DMD pages have fixed refresh rates so the team must often wait several minutes to see the most recent data, even after it is received on the ground. The pages are also rigid and require an investment of time and money to update. To more easily organize and visualize spacecraft telemetry, the EPOXI team has begun experimenting with Splunk, a commercially-available data mining system. Splunk can take data received from the spacecraft's different data channels, often in different formats, and index all the data into a common format. Splunk allows flight team members to search through the different data formats from a single interface and to filter results by time range and data field to make finding specific spacecraft events quick and easy. Furthermore, Splunk provides functions to create custom interfaces which help team members visualize the data in charts and graphs to show how the health of the spacecraft has changed over time. One of the goals of my internship with my mentor, Victor Hwang, was to develop new Splunk interfaces to replace the DMD pages and give the spacecraft team access to historical data and visualizations that were previously unavailable. The specific requirements of these pages are discussed in the next section.
Krasowski, Matthew D.; Schriever, Andy; Mathur, Gagan; Blau, John L.; Stauffer, Stephanie L.; Ford, Bradley A.
2015-01-01
Background: Pathology data contained within the electronic health record (EHR), and laboratory information system (LIS) of hospitals represents a potentially powerful resource to improve clinical care. However, existing reporting tools within commercial EHR and LIS software may not be able to efficiently and rapidly mine data for quality improvement and research applications. Materials and Methods: We present experience using a data warehouse produced collaboratively between an academic medical center and a private company. The data warehouse contains data from the EHR, LIS, admission/discharge/transfer system, and billing records and can be accessed using a self-service data access tool known as Starmaker. The Starmaker software allows users to use complex Boolean logic, include and exclude rules, unit conversion and reference scaling, and value aggregation using a straightforward visual interface. More complex queries can be achieved by users with experience with Structured Query Language. Queries can use biomedical ontologies such as Logical Observation Identifiers Names and Codes and Systematized Nomenclature of Medicine. Result: We present examples of successful searches using Starmaker, falling mostly in the realm of microbiology and clinical chemistry/toxicology. The searches were ones that were either very difficult or basically infeasible using reporting tools within the EHR and LIS used in the medical center. One of the main strengths of Starmaker searches is rapid results, with typical searches covering 5 years taking only 1–2 min. A “Run Count” feature quickly outputs the number of cases meeting criteria, allowing for refinement of searches before downloading patient-identifiable data. The Starmaker tool is available to pathology residents and fellows, with some using this tool for quality improvement and scholarly projects. Conclusion: A data warehouse has significant potential for improving utilization of clinical pathology testing. Software that can access data warehouse using a straightforward visual interface can be incorporated into pathology training programs. PMID:26284156
A Customizable Dashboarding System for Watershed Model Interpretation
NASA Astrophysics Data System (ADS)
Easton, Z. M.; Collick, A.; Wagena, M. B.; Sommerlot, A.; Fuka, D.
2017-12-01
Stakeholders, including policymakers, agricultural water managers, and small farm managers, can benefit from the outputs of commonly run watershed models. However, the information that each stakeholder needs is different. While policy makers are often interested in the broader effects that small farm management may have on a watershed during extreme events or over long periods, farmers are often interested in field-specific effects at daily or seasonal scales. To provide stakeholders with the ability to analyze and interpret data from large-scale watershed models, we have developed a framework that can support custom exploration of the large datasets produced. For the volume of data produced by these models, SQL-based data queries are not efficient; thus, we employ a "Not Only SQL" (NO-SQL) query language, which allows data to scale in both quantity and query volumes. We demonstrate a stakeholder-customizable Dashboarding system that allows stakeholders to create custom 'dashboards' to summarize model output specific to their needs. Dashboarding is a dynamic, purpose-based visual interface for displaying one-to-many database linkages; it presents information for a single time period or monitors it dynamically over time, and it allows users to quickly define focus areas of interest for their analysis. We utilize a single watershed model that is run four times daily with a combined set of climate projections, which are then indexed and added to an ElasticSearch datastore. ElasticSearch is a NO-SQL search engine built on top of Apache Lucene, a free and open-source information retrieval software library. Aligned with the ElasticSearch project is the open source visualization and analysis system, Kibana, which we utilize for custom stakeholder dashboarding. The dashboards create a visualization of the stakeholder-selected analysis and can be extended to recommend robust strategies to support decision-making.
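A hypothetical sketch of pushing one model-output record into Elasticsearch for a Kibana dashboard is shown below; the host, index name, and field names are invented, and the official elasticsearch-py 8.x client is assumed to be installed.

```python
from datetime import datetime
from elasticsearch import Elasticsearch  # official Python client (8.x assumed)

# Index one day of watershed-model output so a Kibana dashboard can visualize it;
# all identifiers and values here are illustrative.
es = Elasticsearch("http://localhost:9200")

record = {
    "timestamp": datetime(2017, 6, 1).isoformat(),
    "subbasin": "subbasin_12",
    "scenario": "rcp45",
    "streamflow_cms": 14.2,
    "sediment_t_day": 3.1,
}
es.index(index="watershed-model-output", document=record)
```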
NASA Technology Takes Center Stage
NASA Technical Reports Server (NTRS)
2004-01-01
In today's fast-paced business world, there is often more information available to researchers than there is time to search through it. Data mining has become the answer to finding the proverbial "needle in a haystack," as companies must be able to quickly locate specific pieces of information from large collections of data. Perilog, a suite of data-mining tools, searches for hidden patterns in large databases to determine previously unrecognized relationships. By retrieving and organizing contextually relevant data from any sequence of terms - from genetic data to musical notes - the software can intelligently compile information about desired topics from databases.
PubMed-EX: a web browser extension to enhance PubMed search with text mining features.
Tsai, Richard Tzong-Han; Dai, Hong-Jie; Lai, Po-Ting; Huang, Chi-Hsin
2009-11-15
PubMed-EX is a browser extension that marks up PubMed search results with additional text-mining information. PubMed-EX's page mark-up, which includes section categorization and gene/disease and relation mark-up, can help researchers to quickly focus on key terms and provide additional information on them. All text processing is performed server-side, freeing up user resources. PubMed-EX is freely available at http://bws.iis.sinica.edu.tw/PubMed-EX and http://iisr.cse.yzu.edu.tw:8000/PubMed-EX/.
genenames.org: the HGNC resources in 2011
Seal, Ruth L.; Gordon, Susan M.; Lush, Michael J.; Wright, Mathew W.; Bruford, Elspeth A.
2011-01-01
The HUGO Gene Nomenclature Committee (HGNC) aims to assign a unique gene symbol and name to every human gene. The HGNC database currently contains almost 30 000 approved gene symbols, over 19 000 of which represent protein-coding genes. The public website, www.genenames.org, displays all approved nomenclature within Symbol Reports that contain data curated by HGNC editors and links to related genomic, phenotypic and proteomic information. Here we describe improvements to our resources, including a new Quick Gene Search, a new List Search, an integrated HGNC BioMart and a new Statistics and Downloads facility. PMID:20929869
Legal Medicine Information System using CDISC ODM.
Kiuchi, Takahiro; Yoshida, Ken-ichi; Kotani, Hirokazu; Tamaki, Keiji; Nagai, Hisashi; Harada, Kazuki; Ishikawa, Hirono
2013-11-01
We have developed a new database system for forensic autopsies, called the Legal Medicine Information System, using the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM). This system comprises two subsystems, namely the Institutional Database System (IDS) located in each institute and containing personal information, and the Central Anonymous Database System (CADS) located in the University Hospital Medical Information Network Center containing only anonymous information. CDISC ODM is used as the data transfer protocol between the two subsystems. Using the IDS, forensic pathologists and other staff can register and search for institutional autopsy information, print death certificates, and extract data for statistical analysis. They can also submit anonymous autopsy information to the CADS semi-automatically. This reduces the burden of double data entry, the time-lag of central data collection, and anxiety regarding legal and ethical issues. Using the CADS, various studies on the causes of death can be conducted quickly and easily, and the results can be used to prevent similar accidents, diseases, and abuse. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
RecA: Regulation and Mechanism of a Molecular Search Engine.
Bell, Jason C; Kowalczykowski, Stephen C
2016-06-01
Homologous recombination maintains genomic integrity by repairing broken chromosomes. The broken chromosome is partially resected to produce single-stranded DNA (ssDNA) that is used to search for homologous double-stranded DNA (dsDNA). This homology driven 'search and rescue' is catalyzed by a class of DNA strand exchange proteins that are defined in relation to Escherichia coli RecA, which forms a filament on ssDNA. Here, we review the regulation of RecA filament assembly and the mechanism by which RecA quickly and efficiently searches for and identifies a unique homologous sequence among a vast excess of heterologous DNA. Given that RecA is the prototypic DNA strand exchange protein, its behavior affords insight into the actions of eukaryotic RAD51 orthologs and their regulators, BRCA2 and other tumor suppressors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
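A compact serial IDA* sketch makes the threshold-iteration structure concrete; in the parallel window scheme described above, each process would run the bounded depth-first search with a different cost threshold rather than iterating the thresholds one after another. The grid example and Manhattan heuristic below are illustrative, not taken from the paper.

```python
# Serial IDA* sketch: repeated depth-first searches with an increasing
# f = g + h threshold. In parallel window search, each process runs
# bounded_dfs with a different threshold ("window") simultaneously.
def ida_star(start, goal, neighbors, h):
    def bounded_dfs(node, g, threshold, path):
        f = g + h(node, goal)
        if f > threshold:
            return None, f                    # exceeded this window's threshold
        if node == goal:
            return path, f
        next_threshold = float("inf")
        for nxt, cost in neighbors(node):
            if nxt in path:                   # avoid trivial cycles
                continue
            found, t = bounded_dfs(nxt, g + cost, threshold, path + [nxt])
            if found is not None:
                return found, t
            next_threshold = min(next_threshold, t)
        return None, next_threshold

    threshold = h(start, goal)
    while True:
        found, threshold = bounded_dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if threshold == float("inf"):
            return None                       # no solution exists

# Toy example: shortest path on a 4x4 grid with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 4 and 0 <= ny < 4:
            yield (nx, ny), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(ida_star((0, 0), (3, 3), grid_neighbors, manhattan))
```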
The NOAA OneStop System: From Well-Curated Metadata to Data Discovery
NASA Astrophysics Data System (ADS)
McQuinn, E.; Jakositz, A.; Caldwell, A.; Delk, Z.; Neufeld, D.; Shapiro, J.; Partee, R.; Milan, A.
2017-12-01
The NOAA OneStop project is a pathfinder in the realm of enabling users to search for, discover, and access NOAA data. As the project continues along its path to maturity, it has become evident that three areas are of utmost importance to its success in the Earth science community: ensuring quality metadata, building a robust and scalable backend architecture, and keeping the user interface simple to use. Why is this the case? Because, simply put, we are dealing with all aspects of a Big Data problem: large volumes of disparate data needing to be quickly and easily processed and retrieved. In this presentation we discuss the three key aspects of OneStop architecture and how development in each area must be done through cross-team collaboration in order to succeed. We cover aspects of the web-based user interface and OneStop API and how metadata curators and software engineers have worked together to continually iterate on an ever-improving data discovery tool meant to be used by a variety of users searching across a broad assortment of data types.
Emerging modalities in dysphagia rehabilitation: neuromuscular electrical stimulation.
Huckabee, Maggie-Lee; Doeltgen, Sebastian
2007-10-12
The aim of this review article is to advise the New Zealand medical community about the application of neuromuscular electrical stimulation (NMES) as a treatment for pharyngeal swallowing impairment (dysphagia). NMES in this field of rehabilitation medicine has quickly emerged as a widely used method overseas but has been accompanied by significant controversy. Basic information is provided about the physiologic background of electrical stimulation. The literature reviewed in this manuscript was derived through a computer-assisted search using the biomedical database Medline to identify all relevant articles published from the inception of the database up to January 2007. The reviewers used the following search strategy: [(deglutition disorders OR dysphagia) AND (neuromuscular electrical stimulation OR NMES)]. In addition, the technique of reference tracing was used, and very recently published studies known to the authors but not yet included in the database systems were included. This review elucidates not only the substantive potential benefit of this treatment, but also potential key concerns for patient safety and long-term outcome. The discussion within the clinical and research communities, especially around the commercially available VitalStim stimulator, is objectively explained.
A Search Engine That's Aware of Your Needs
NASA Technical Reports Server (NTRS)
2005-01-01
Internet research can be compared to trying to drink from a firehose. Such a wealth of information is available that even the simplest inquiry can sometimes generate tens of thousands of leads, more information than most people can handle, and more burdensome than most can endure. Like everyone else, NASA scientists rely on the Internet as a primary search tool. Unlike the average user, though, NASA scientists perform some pretty sophisticated, involved research. To help manage the Internet and to allow researchers at NASA to gain better, more efficient access to the wealth of information, the Agency needed a search tool that was more refined and intelligent than the typical search engine. NASA funded Stottler Henke, Inc., of San Mateo, California, a cutting-edge software company, with a Small Business Innovation Research (SBIR) contract to develop the Aware software for searching through the vast stores of knowledge quickly and efficiently. The partnership was through NASA's Ames Research Center.
TokSearch: A search engine for fusion experimental data
Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.; ...
2018-04-01
At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either Matlab™ or Python.
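TokSearch's actual API is not given in the abstract, so the sketch below only illustrates the general pattern it describes: a parallelized, map-style pass over many shots with an arbitrary user-defined extraction step, followed by an SQL-like filter. The shot range and the fetch_signal helper are hypothetical placeholders for a site-specific archive reader (e.g. one built on MDSplus).

```python
# Illustrative pattern only (not the TokSearch API): run an arbitrary
# per-shot extraction function over many shots in parallel, then filter
# the results much like an SQL WHERE clause.
from multiprocessing import Pool

def fetch_signal(shot, node):
    """Hypothetical placeholder for a site-specific archive read (e.g. MDSplus)."""
    raise NotImplementedError("archive access goes here")

def max_plasma_current(shot):
    # User-defined "query": reduce a raw signal to one scalar per shot.
    try:
        t, ip = fetch_signal(shot, "ip")
        return shot, max(abs(v) for v in ip)
    except Exception:
        return shot, None                     # missing or unreadable shot data

if __name__ == "__main__":
    shots = range(170000, 170100)             # arbitrary example shot range
    with Pool(processes=8) as pool:
        results = pool.map(max_plasma_current, shots)
    # Keep only shots whose peak current exceeds a threshold.
    hits = [(s, ip) for s, ip in results if ip is not None and ip > 1.0e6]
    print(hits)
```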
In Search of a Better Bean: A Simple Activity to Introduce Plant Biology
ERIC Educational Resources Information Center
Spaccarotella, Kim; James, Roxie
2014-01-01
Measuring plant stem growth over time is a simple activity commonly used to introduce concepts in growth and development in plant biology (Reid & Pu, 2007). This Quick Fix updates the activity and incorporates a real-world application: students consider possible effects of soil substrate and sunlight conditions on plant growth without needing…
USDA-ARS's Scientific Manuscript database
Discovery of X. fastidiosa from olive trees with "Olive quick decline syndrome" (OQDS) in October 2013 on the western coast of the Salento Peninsula prompted an immediate search for insect vectors of the bacterium. The dominant xylem-fluid feeding hemipteran collected in olive orchards was the meado...
Code of Federal Regulations, 2010 CFR
2010-10-01
... ASSIST websites: (1) ASSIST (http://assist.daps.dla.mil); (2) Quick Search (http://assist.daps.dla.mil/quicksearch); (3) ASSISTdocs.com (http://assistdocs.com). (b) Documents not available from ASSIST may be... Wizard (http://assist.daps.dla.mil/wizard); (2) Phoning the DoDSSP Customer Service Desk (215) 697-2179...
ERIC Educational Resources Information Center
Choi, Y. Joon; An, Soonok
2016-01-01
Objective: The purpose of the study is to systematically review the available evidence on the effectiveness of interventions to improve the response of various helping professionals who come into contact with female victims of intimate partner violence (IPV). Methods: Several databases were searched, and N = 38 studies met the inclusion criteria…
ERIC Educational Resources Information Center
Irving, Holly Berry
The materials cited in this annotated bibliography focus on maternal and infant health and the critical importance of good nutrition. Audiovisuals and books are listed in 152 citations derived from online searches of the AGRICOLA database. Materials are available from the National Agricultural Library or through interlibrary loan to a local…
Food Safety and Sanitation Audiovisuals. January 1979-December 1988. Quick Bibliography Series.
ERIC Educational Resources Information Center
Updegrove, Natalie
The citations in this annotated bibliography focus on hygiene and sanitation in the preparation of food and standards for food service to the public. Materials cited can be obtained through interlibrary loan through a local library or directly from the National Agricultural Library. The bibliography was derived from online searches of the AGRICOLA…
ERIC Educational Resources Information Center
Leonard, Scott A., Comp.; Dobert, Raymond, Comp.
This bibliography on the commercialization and economic aspects of biotechnology was produced by the National Agricultural Library. It contains 151 citations in English from the AGRICOLA database. The search strategy is included, call numbers are given for each entry, and abstracts are provided for some citations. The bibliography concludes with…
New Teachers Search for Place in New Orleans
ERIC Educational Resources Information Center
Zubrzycki, Jaclyn
2013-01-01
Derek Roguski and Hannah Sadtler came to New Orleans in 2008 through Teach For America. The competitive program provided five weeks' training and helped place them in schools, and both young teachers were eager to learn to teach and help the city's students. But they quickly found that they had more questions than answers about the schools they…
Code of Federal Regulations, 2011 CFR
2011-10-01
... ASSIST websites: (1) ASSIST (http://assist.daps.dla.mil); (2) Quick Search (http://assist.daps.dla.mil/quicksearch); (3) ASSISTdocs.com (http://assistdocs.com). (b) Documents not available from ASSIST may be... Wizard (http://assist.daps.dla.mil/wizard); (2) Phoning the DoDSSP Customer Service Desk (215) 697-2179...
Code of Federal Regulations, 2014 CFR
2014-10-01
... ASSIST websites: (1) ASSIST (https://assist.dla.mil/online/start/; (2) Quick Search (http://quicksearch.dla.mil/; (3) ASSISTdocs.com (http://assistdocs.com). (b) Documents not available from ASSIST may be... Wizard (https://assist.dla.mil/wizard/index.cfm); (2) Phoning the DoDSSP Customer Service Desk (215) 697...
Code of Federal Regulations, 2013 CFR
2013-10-01
... ASSIST websites: (1) ASSIST (http://assist.daps.dla.mil); (2) Quick Search (http://assist.daps.dla.mil/quicksearch); (3) ASSISTdocs.com (http://assistdocs.com). (b) Documents not available from ASSIST may be... Wizard (http://assist.daps.dla.mil/wizard); (2) Phoning the DoDSSP Customer Service Desk (215) 697-2179...
Code of Federal Regulations, 2012 CFR
2012-10-01
... ASSIST websites: (1) ASSIST (http://assist.daps.dla.mil); (2) Quick Search (http://assist.daps.dla.mil/quicksearch); (3) ASSISTdocs.com (http://assistdocs.com). (b) Documents not available from ASSIST may be... Wizard (http://assist.daps.dla.mil/wizard); (2) Phoning the DoDSSP Customer Service Desk (215) 697-2179...
Reading the World through Very Large Numbers
ERIC Educational Resources Information Center
Greer, Brian; Mukhopadhyay, Swapna
2010-01-01
One original, and continuing, source of interest in large numbers is observation of the natural world, such as trying to count the stars on a clear night or contemplation of the number of grains of sand on the seashore. Indeed, a search of the internet quickly reveals many discussions of the relative numbers of stars and grains of sand. Big…
Quick probabilistic binary image matching: changing the rules of the game
NASA Astrophysics Data System (ADS)
Mustafa, Adnan A. Y.
2016-09-01
A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
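The core observation, that a handful of randomly sampled pixel comparisons is usually enough to expose dissimilarity between binary images, can be sketched as follows. The uniform sampling scheme and fixed sample budget are simplifications for illustration and are not the PMMBI model itself.

```python
# Simplified illustration of probabilistic binary-image matching: sample
# random pixel positions and stop as soon as a mismatch is found. Highly
# dissimilar images are typically rejected after only a few comparisons.
import numpy as np

def probably_similar(img_a, img_b, max_samples=64, rng=None):
    """Return True if no sampled pixel pair differs (images may be similar)."""
    assert img_a.shape == img_b.shape
    rng = rng or np.random.default_rng()
    h, w = img_a.shape
    for k in range(max_samples):
        y, x = rng.integers(h), rng.integers(w)
        if img_a[y, x] != img_b[y, x]:
            print(f"dissimilar after {k + 1} pixel comparison(s)")
            return False
    return True

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(256, 256))
b = 1 - a                                 # maximally dissimilar image
c = a.copy()                              # identical image
print(probably_similar(a, b, rng=rng))    # usually False after 1 comparison
print(probably_similar(a, c, rng=rng))    # True: no mismatch found
```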
Large scale track analysis for wide area motion imagery surveillance
NASA Astrophysics Data System (ADS)
van Leeuwen, C. J.; van Huis, J. R.; Baan, J.
2016-10-01
Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high quality track information of more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows a user to quickly obtain only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
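To make the tile-based format concrete, the short sketch below computes which pyramid level and which tile indices must be read to reconstruct an arbitrary region at a requested scale. The tile size and the level-ordering convention are assumptions, since the paper's exact file layout is not given in the abstract.

```python
# Sketch of region retrieval from a tiled multi-scale image: pick the pyramid
# level closest to the requested scale, then list the tile indices covering
# the region. Tile size and level convention are illustrative assumptions.
import math

TILE = 512  # pixels per tile edge at every level (assumed)

def tiles_for_region(x0, y0, x1, y1, scale, num_levels):
    """Return (level, [(tx, ty), ...]) covering region [x0,x1) x [y0,y1).

    `scale` <= 1.0 is the requested sub-sampling factor relative to full
    resolution; level L is assumed to store the image down-sampled by 2**L.
    """
    level = min(num_levels - 1, max(0, int(round(-math.log2(scale)))))
    f = 2 ** level                                # down-sampling factor of level
    tx0, ty0 = (x0 // f) // TILE, (y0 // f) // TILE
    tx1, ty1 = ((x1 - 1) // f) // TILE, ((y1 - 1) // f) // TILE
    tiles = [(tx, ty) for ty in range(ty0, ty1 + 1) for tx in range(tx0, tx1 + 1)]
    return level, tiles

# e.g. a 4000 x 3000 pixel region around a landmark, at quarter resolution
level, tiles = tiles_for_region(20000, 12000, 24000, 15000, scale=0.25, num_levels=6)
print(level, len(tiles), tiles[:4])
```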
Mining the Kilo-Degree Survey for solar system objects
NASA Astrophysics Data System (ADS)
Mahlke, M.; Bouy, H.; Altieri, B.; Verdoes Kleijn, G.; Carry, B.; Bertin, E.; de Jong, J. T. A.; Kuijken, K.; McFarland, J.; Valentijn, E.
2018-02-01
Context. The search for minor bodies in the solar system promises insights into its formation history. Wide imaging surveys offer the opportunity to serendipitously discover and identify these traces of planetary formation and evolution. Aim. We aim to present a method to acquire position, photometry, and proper motion measurements of solar system objects (SSOs) in surveys using dithered image sequences. The application of this method on the Kilo-Degree Survey (KiDS) is demonstrated. Methods: Optical images of 346 deg² fields of the sky are searched in up to four filters using the AstrOmatic software suite to reduce the pixel data to catalog data. The SSOs within the acquired sources are selected based on a set of criteria depending on their number of observations, motion, and size. The Virtual Observatory SkyBoT tool is used to identify known objects. Results: We observed 20 221 SSO candidates, with an estimated false-positive content of less than 0.05%. Of these SSO candidates, 53.4% are identified by SkyBoT. KiDS can detect previously unknown SSOs because of its depth and coverage at high ecliptic latitude, including parts of the Southern Hemisphere. Thus we expect the large fraction of the 46.6% of unidentified objects to be truly new SSOs. Conclusions: Our method is applicable to a variety of dithered surveys such as DES, LSST, and Euclid. It offers a quick and easy-to-implement search for SSOs. SkyBoT can then be used to estimate the completeness of the recovered sample. The tables of raw data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A21
A rational approach to heavy-atom derivative screening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joyce, M. Gordon; Radaev, Sergei; Sun, Peter D., E-mail: psun@nih.gov
2010-04-01
Despite the development in recent times of a range of techniques for phasing macromolecules, the conventional heavy-atom derivatization method still plays a significant role in protein structure determination. However, this method has become less popular in modern high-throughput oriented crystallography, mostly owing to its trial-and-error nature, which often results in lengthy empirical searches requiring large numbers of well diffracting crystals. In addition, the phasing power of heavy-atom derivatives is often compromised by lack of isomorphism or even loss of diffraction. In order to overcome the difficulties associated with the ‘classical’ heavy-atom derivatization procedure, an attempt has been made to develop a rational crystal-free heavy-atom derivative-screening method and a quick-soak derivatization procedure which allows heavy-atom compound identification. The method includes three basic steps: (i) the selection of likely reactive compounds for a given protein and specific crystallization conditions based on pre-defined heavy-atom compound reactivity profiles, (ii) screening of the chosen heavy-atom compounds for their ability to form protein adducts using mass spectrometry and (iii) derivatization of crystals with selected heavy-metal compounds using the quick-soak method to maximize diffraction quality and minimize non-isomorphism. Overall, this system streamlines the process of heavy-atom compound identification and minimizes the problem of non-isomorphism in phasing.
A method for detecting and characterizing outbreaks of infectious disease from clinical reports.
Cooper, Gregory F; Villamarin, Ricardo; Tsui, Fu-Chiang (Rich); Millett, Nicholas; Espino, Jeremy U; Wagner, Michael M
2015-02-01
Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising. Copyright © 2014 Elsevier Inc. All rights reserved.
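The framework's central idea, scoring candidate epidemiological models by how well they explain the stream of case evidence and then reading off outbreak properties from the highest-posterior models, can be caricatured with a much smaller toy than the authors' system. Below, daily influenza counts are scored against a few candidate outbreak curves with a Poisson likelihood and a flat prior; the curve family, counts, and parameters are assumptions made only for illustration.

```python
# Toy Bayesian outbreak characterization (NOT the authors' full framework):
# score candidate outbreak curves against observed daily case counts with a
# Poisson likelihood, then normalize to a posterior over the candidates.
import math

def poisson_loglik(counts, expected):
    return sum(-lam + k * math.log(lam) - math.lgamma(k + 1)
               for k, lam in zip(counts, expected))

def outbreak_curve(peak_day, size, days, width=5.0):
    # Simple bell-shaped expected-case curve; a stand-in for a real model.
    return [max(1e-6, size * math.exp(-((d - peak_day) ** 2) / (2 * width ** 2)))
            for d in days]

observed = [1, 2, 2, 5, 9, 14, 20, 18, 12, 7]     # hypothetical daily counts
days = list(range(len(observed)))

# Candidate models: (peak day, outbreak size), with a flat prior over them.
candidates = [(p, s) for p in (4, 6, 8) for s in (10, 20, 40)]
logpost = {c: poisson_loglik(observed, outbreak_curve(c[0], c[1], days))
           for c in candidates}
m = max(logpost.values())
z = sum(math.exp(v - m) for v in logpost.values())
posterior = {c: math.exp(v - m) / z for c, v in logpost.items()}

best = max(posterior, key=posterior.get)
print("most probable (peak day, size):", best, "posterior:", round(posterior[best], 3))
```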
Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J
2015-10-01
Biomedical data sharing is desirable, but problematic. Data "discovery" approaches, which establish the existence rather than the substance of data, precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to provide a general-purpose, Web-based, data discovery tool that can be quickly installed by any genotype-phenotype data owner, or network of data owners, to make safe or sensitive content appropriately discoverable. Data fields or content of any type can be accommodated, from simple ID and label fields through to extensive genotype and phenotype details based on ontologies. The system provides a "shop window" in front of data, with the main interfaces being a simple search box and a powerful "query-builder" that enable very elaborate queries to be formulated. After a successful search, counts of records are reported grouped by "openAccess" (data may be directly accessed), "linkedAccess" (a source link is provided), and "restrictedAccess" (facilitated data requests and subsequent provision of approved records). An administrator interface provides a wide range of options for system configuration, enabling highly customized single-site or federated networks to be established. Current uses include rare disease data discovery, patient matchmaking, and a Beacon Web service. © 2015 WILEY PERIODICALS, INC.
Shoman, Haitham; Karafillakis, Emilie; Rawaf, Salman
2017-01-04
An Ebola outbreak started in December 2013 in Guinea and spread to Liberia and Sierra Leone in 2014. The health systems in place in the three countries lacked the infrastructure and the preparation to respond to the outbreak quickly, and the World Health Organisation (WHO) declared a public health emergency of international concern on 8 August 2014. The aim of this study was to determine the effects of health systems' organisation and performance on the West African Ebola outbreak in Guinea, Liberia and Sierra Leone and lessons learned. The WHO health system building blocks were used to evaluate the performance of the health systems in these countries. A systematic review of articles published from inception until July 2015 was conducted following the PRISMA guidelines. Electronic databases including Medline, Embase, Global Health, and the Cochrane library were searched for relevant literature. Grey literature was also searched through Google Scholar and Scopus. Articles were exported and selected based on a set of inclusion and exclusion criteria. Data was then extracted into a spreadsheet and a descriptive analysis was performed. Each study was critically appraised using the Crowe Critical Appraisal Tool. The review was supplemented with expert interviews where participants were identified from reference lists and using the snowball method. Thirteen articles were included in the study and six experts from different organisations were interviewed. Findings were analysed based on the WHO health system building blocks. The shortage of health workers had an important effect on the control of Ebola, and the health workforce itself suffered the most from the outbreak. This was followed by information and research, medical products and technologies, health financing and leadership and governance. Poor surveillance and lack of proper communication also contributed to the outbreak. Lack of available funds jeopardised payments and purchase of essential resources and medicines. Leadership and governance yielded the fewest findings, but there was an overarching consensus that stronger leadership and governance would have supported a prompt response and adequate coordination and management of resources. Ensuring an adequate and efficient health workforce is of the utmost importance to ensure a strong health system and a quick response to new outbreaks. Adequate service delivery results from a collective success of the other blocks. Health financing and its management is crucial to ensure availability of medical products, fund payments to staff and purchase necessary equipment. However, the main defects in leadership and governance need to be rigorously explored in order to control future outbreaks.
An image database management system for conducting CAD research
NASA Astrophysics Data System (ADS)
Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.
2007-03-01
The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
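A minimal sketch of the kind of subset query such a system serves; the schema below (modality, lesion type, biopsy status) is an assumption made for illustration, and sqlite3 stands in for the MySQL backend described.

```python
# Hypothetical schema and subset query for a CAD research metadata database.
# sqlite3 stands in for the MySQL backend; columns are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE cases (
    case_id TEXT, modality TEXT, lesion_type TEXT, biopsy_proven INTEGER,
    acquisition_date TEXT)""")
db.executemany("INSERT INTO cases VALUES (?, ?, ?, ?, ?)", [
    ("A001", "FFDM", "mass", 1, "2005-03-02"),
    ("A002", "US",   "cyst", 0, "2006-07-19"),
    ("A003", "MRI",  "mass", 1, "2006-11-05"),
])

# Generate a research subset: biopsy-proven masses imaged with FFDM or MRI.
subset = db.execute(
    """SELECT case_id, modality, acquisition_date FROM cases
       WHERE lesion_type = 'mass' AND biopsy_proven = 1
         AND modality IN ('FFDM', 'MRI')""").fetchall()
print(subset)
```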
The use of 3D surface scanning for the measurement and assessment of the human foot
2010-01-01
Background A number of surface scanning systems with the ability to quickly and easily obtain 3D digital representations of the foot are now commercially available. This review aims to present a summary of the reported use of these technologies in footwear development, the design of customised orthotics, and investigations for other ergonomic purposes related to the foot. Methods The PubMed and ScienceDirect databases were searched. Reference lists and experts in the field were also consulted to identify additional articles. Studies in English which had 3D surface scanning of the foot as an integral element of their protocol were included in the review. Results Thirty-eight articles meeting the search criteria were included. Advantages and disadvantages of using 3D surface scanning systems are highlighted. A meta-analysis of studies using scanners to investigate the changes in foot dimensions during varying levels of weight bearing was carried out. Conclusions Modern 3D surface scanning systems can obtain accurate and repeatable digital representations of the foot shape and have been successfully used in medical, ergonomic and footwear development applications. The increasing affordability of these systems presents opportunities for researchers investigating the foot and for manufacturers of foot related apparel and devices, particularly those interested in producing items that are customised to the individual. Suggestions are made for future areas of research and for the standardization of the protocols used to produce foot scans. PMID:20815914
Identification of an interstellar oxide grain from the Murchison meteorite by ion imaging
NASA Technical Reports Server (NTRS)
Nittler, L. R.; Walker, R. M.; Zinner, E.; Hoppe, P.; Lewis, R. S.
1993-01-01
We report here the first use of a new ion-imaging system to locate a rare interstellar aluminum oxide grain in a Murchison acid residue. While several types of carbon-rich interstellar grains, including graphite, diamond, SiC, and TiC, have previously been found, isotopically anomalous interstellar oxide grains have proven more elusive. We have developed an ion imaging system which allows us to map the isotopic composition of large numbers of grains relatively quickly and is, thus, ideally suited to search for isotopically exotic subsets of grains. The system consists of a PHOTOMETRICS CCD camera coupled to the microchannel plate/fluorescent screen of the WU modified CAMECA IMS-3F ion microprobe. Isotopic images of the sample surface are focused on the CCD and digitized. Subsequent image processing identifies individual grains in the images and determines isotopic ratios for each. For the present work, we have imaged in O-16 and O-18; negligible contributions of (17)OH(-) and (16)OH2(-) signals to the O-18 signal allow the use of low mass resolution, simplifying the measurements. Repeated imaging runs on terrestrial corundum particles showed that the system measures isotopic ratios reproducibly to about +/- 40%. Each imaging run took about six minutes to complete, and for this study there were on average 5-15 grains in each image. We have conducted imaging searches in 2-4 micron size separates of both Orgueil and Murchison.
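The per-grain ratio measurement that the imaging system performs can be sketched with standard image-labeling tools; the synthetic arrays, intensity threshold, and anomaly cutoff below are assumptions for illustration only, not the instrument's actual processing.

```python
# Sketch: locate grains in an O-16 image and compute an O-18/O-16 ratio for
# each grain, flagging grains whose ratio deviates strongly from the median.
# The synthetic data, threshold, and cutoff are illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
o16 = rng.poisson(2.0, size=(128, 128)).astype(float)
o16[10:14, 10:14] += 450.0           # three synthetic "grains"
o16[30:35, 40:45] += 500.0
o16[80:84, 90:94] += 400.0
o18 = 0.002 * o16 + rng.poisson(0.5, size=o16.shape)
o18[80:84, 90:94] += 4.0             # make the third grain O-18-rich

mask = o16 > 50                       # simple intensity threshold for grains
labels, n = ndimage.label(mask)
o16_sum = ndimage.sum(o16, labels, index=list(range(1, n + 1)))
o18_sum = ndimage.sum(o18, labels, index=list(range(1, n + 1)))
ratios = o18_sum / o16_sum

median = np.median(ratios)
for i, r in enumerate(ratios, start=1):
    flag = "ANOMALOUS" if abs(r - median) > 0.4 * median else ""
    print(f"grain {i}: 18O/16O = {r:.4f} {flag}")
```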
Image-guided decision support system for pulmonary nodule classification in 3D thoracic CT images
NASA Astrophysics Data System (ADS)
Kawata, Yoshiki; Niki, Noboru; Ohmatsu, Hironobu; Kusumoto, Masahiro; Kakinuma, Ryutaro; Mori, Kiyoshi; Yamada, Kozo; Nishiyama, Hiroyuki; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki
2004-05-01
The purpose of this study is to develop an image-guided decision support system that assists decision-making in clinical differential diagnosis of pulmonary nodules. This approach retrieves and displays nodules that exhibit morphological and internal profiles consistent with the nodule in question. It uses a three-dimensional (3-D) CT image database of pulmonary nodules for which the diagnosis is known. In order to build the system, the following issues must be solved: 1) to categorize the nodule database with respect to morphological and internal features, 2) to quickly search nodule images similar to an indeterminate nodule from a large database, and 3) to reveal malignancy likelihood computed by using similar nodule images. In particular, the first issue influences the design of the others. The successful categorization of nodule patterns might lead physicians to find important cues that characterize benign and malignant nodules. This paper focuses on an approach to categorize the nodule database with respect to nodule shape and CT density patterns inside the nodule.
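The second issue listed above, quickly retrieving database nodules whose profiles are close to an indeterminate nodule, reduces to a nearest-neighbour search in feature space. The sketch below uses random feature vectors, Euclidean distance, and a fraction-malignant estimate as stand-ins for the paper's actual descriptors and likelihood computation.

```python
# Sketch of similarity-based retrieval: rank database nodules by Euclidean
# distance to the query's feature vector and report the malignancy rate
# among the k closest cases. Features and labels are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
db_features = rng.normal(size=(200, 8))          # e.g. shape + density features
db_malignant = rng.integers(0, 2, size=200)      # known diagnoses (0 or 1)

def retrieve_similar(query, k=10):
    dists = np.linalg.norm(db_features - query, axis=1)
    idx = np.argsort(dists)[:k]                  # k most similar cases
    likelihood = db_malignant[idx].mean()        # fraction malignant among them
    return idx, likelihood

query = rng.normal(size=8)                       # indeterminate nodule features
idx, likelihood = retrieve_similar(query, k=10)
print("similar cases:", idx.tolist())
print("malignancy likelihood suggested by similar cases:", likelihood)
```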
NASA Astrophysics Data System (ADS)
Griffiths, Bradley Joseph
New supply chain management methods using radio frequency identification (RFID) and global positioning system (GPS) technology are quickly being adopted by companies as various inventory management benefits are being realized. For example, companies such as Nippon Yusen Kaisha (NYK) Logistics use the technology coupled with geospatial support systems for distributors to quickly find and manage freight containers. Traditional supply chain management methods require pen-to-paper reporting, searching inventory on foot, and human data entry. Some companies that prioritize supply chain management have not adopted the new technology, because they may feel that their traditional methods save the company expenses. This thesis serves as a pilot study that examines how information technology (IT) utilizing RFID and GPS technology can increase workplace productivity, decrease the human labor associated with inventorying, and be used for spatial analysis by management. This pilot study represents the first attempt to couple RFID technology with Geographic Information Systems (GIS) in supply chain management efforts to analyze and locate mobile assets, exploring the costs and benefits of implementation and how the technology can be employed. This pilot study identified XYZ Logistics, Inc. as a candidate to implement a new inventory management method. XYZ Logistics, Inc. is a fictitious company but represents a factual corporation. The name has been changed to provide the company with anonymity and to avoid disclosing confidential business information. XYZ Logistics, Inc., is a nation-wide company that specializes in providing space solutions for customers including portable offices, storage containers, and customizable buildings.
Electronic Procedures for Medical Operations
NASA Technical Reports Server (NTRS)
2015-01-01
Electronic procedures are replacing text-based documents for recording the steps in performing medical operations aboard the International Space Station. S&K Aerospace, LLC, has developed a content-based electronic system, based on the Extensible Markup Language (XML) standard, that separates text from formatting standards and tags items contained in procedures so they can be recognized by other electronic systems. For example, to change a standard format, electronic procedures are changed in a single batch process, and the entire body of procedures will have the new format. Procedures can be quickly searched to determine which are affected by software and hardware changes. Similarly, procedures are easily shared with other electronic systems. The system also enables real-time data capture and automatic bookmarking of current procedure steps. In Phase II of the project, S&K Aerospace developed a Procedure Representation Language (PRL) and tools to support the creation and maintenance of electronic procedures for medical operations. The goal is to develop these tools in such a way that new advances can be inserted easily, leading to an eventual medical decision support system.
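Because procedures are stored as tagged XML rather than formatted text, a batch pass can search or restyle the whole library programmatically. The element and attribute names in this sketch are invented for illustration; they are not the project's actual PRL schema.

```python
# Sketch of searching a tagged, XML-based procedure for steps that reference
# a given hardware item. Element/attribute names are invented, not real PRL.
import xml.etree.ElementTree as ET

xml_doc = """
<procedure id="MED-023" title="Blood draw">
  <step n="1" hardware="tourniquet">Apply tourniquet above the elbow.</step>
  <step n="2" hardware="syringe">Draw 5 mL sample.</step>
  <step n="3">Label and stow the sample.</step>
</procedure>
"""

root = ET.fromstring(xml_doc)

def steps_using(procedure, hardware):
    """Return (procedure id, step number, text) for steps that use `hardware`."""
    return [(procedure.get("id"), step.get("n"), step.text.strip())
            for step in procedure.iter("step")
            if step.get("hardware") == hardware]

print(steps_using(root, "syringe"))
```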
Evaluation of drug interaction microcomputer software: Dambro's Drug Interactions.
Poirier, T I; Giudici, R A
1990-01-01
Dambro's Drug Interactions was evaluated using general and specific criteria. The installation process, ease of learning and use were rated excellent. The user documentation and quality of the technical support were good. The scope of coverage, clinical documentation, frequency of updates, and overall clinical performance were fair. The primary advantages of the program are the quick searching and detection of drug interactions, and the attempt to provide useful interaction data, i.e., significance and reference. The disadvantages are the lack of current drug interaction information, outdated references, lack of evaluative drug interaction information, and the inability to save or print patient profiles. The program is not a good value for the pharmacist but has limited use as a quick screening tool.
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
The K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods in data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has shortcomings such as a large amount of sample computation and a strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (Quick K-Nearest Neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity calculations. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
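A reduced-sample k-NN can be sketched briefly. To keep the illustration short, the CURE-based representative selection described in the abstract is replaced here by a simple per-class k-means step, which is an assumption and not the paper's method; the data are synthetic.

```python
# Sketch: shrink the training set to per-class representatives (k-means here,
# standing in for the CURE-based selection in the paper), then classify with
# plain k-NN against the much smaller representative set.
import numpy as np

def kmeans(X, k, iters=20, rng=None):
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def build_representatives(X, y, reps_per_class=5):
    reps, rep_labels = [], []
    for c in np.unique(y):
        centers = kmeans(X[y == c], reps_per_class)
        reps.append(centers)
        rep_labels += [c] * len(centers)
    return np.vstack(reps), np.array(rep_labels)

def knn_predict(x, X, y, k=3):
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    vals, counts = np.unique(y[idx], return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
R, ry = build_representatives(X, y, reps_per_class=5)  # 10 points instead of 400
print(knn_predict(np.array([3.5, 3.8]), R, ry, k=3))   # expected class: 1
```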
Evaluation of the Quality of Online Information for Patients with Rare Cancers: Thyroid Cancer.
Kuenzel, Ulrike; Monga Sindeu, Tabea; Schroth, Sarah; Huebner, Jutta; Herth, Natalie
2017-01-24
The Internet offers easy and quick access to a vast amount of patient information. However, several studies point to the poor quality of many websites and the resulting hazards of false information. The aim of this study was to assess the quality of information on thyroid cancer. A patient's search for information about thyroid cancer on German websites was simulated using the search engine Google and the patient portal "Patienten-Information.de". The websites were assessed using a standardized instrument with formal and content aspects from the German Cancer Society. Supporting the results of prior studies that analysed patient information on the Internet, the data showed that the quality of patient information on thyroid cancer is highly heterogeneous depending on the website provider. Most website providers are media outlets and health providers other than health insurers, practices and professionals, and these offer patient information of relatively poor quality. Moreover, most websites offer patient information of low quality. Only a few trustworthy, high-quality websites exist. Google in particular, as a common search engine, focuses more on the dissemination of information than on quality aspects. In order to improve the patient information available from the Internet, the visibility of high-quality websites must be improved. For that, education programs to improve patients' eHealth literacy are needed. A quick and easy evaluation tool for online information suited for patients should be implemented, and patients should be taught to integrate such a tool into their research process.
Solenoid hammer valve developed for quick-opening requirements
NASA Technical Reports Server (NTRS)
Wrench, E. H.
1967-01-01
Quick-opening lightweight solenoid hammer valve requires a low amount of electrical energy to open, and closes by the restoring action of the mechanical springs. This design should be applicable to many quick-opening requirements in fluid systems.
ERIC Educational Resources Information Center
Cobb, Susan C.; Baird, Susan B.
1999-01-01
A survey to determine whether oncology nurses (n=670) use the Internet and for what purpose revealed that they use it for drug information, literature searches, academic information, patient education, and continuing education. Results suggest that continuing-education providers should pursue the Internet as a means of meeting the need for quick,…
Stakeholders' Perceptions on Mandated Student Retention in Early Childhood
ERIC Educational Resources Information Center
Mankins, Jennifer K.
2018-01-01
Reading is one of the primary goals of the early elementary grades. When students start to struggle with this complex skill, educators and parents search for solutions to rectify quickly mounting gaps before a child falls too far behind. In the State of Oklahoma, lawmakers have passed a law requiring mandatory 3rd grade retention for students who…
ERIC Educational Resources Information Center
Hamilton, Gayle; Michalopoulos, Charles
2016-01-01
There is a longstanding debate about whether helping welfare recipients quickly find work or helping them to first obtain some basic education and training better improves their economic well-being. This brief contributes to the debate by presenting long-term findings from three sites in the seven-site National Evaluation of Welfare-to-Work…
Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach
ERIC Educational Resources Information Center
Yang, Seungwon
2013-01-01
Identifying topics of a textual document is useful for many purposes. We can organize the documents by topics in digital libraries. Then, we could browse and search for the documents with specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic…
NCBI GEO: archive for functional genomics data sets--10 years on.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Muertter, Rolf N; Holko, Michelle; Ayanbule, Oluwabukunmi; Yefanov, Andrey; Soboleva, Alexandra
2011-01-01
A decade ago, the Gene Expression Omnibus (GEO) database was established at the National Center for Biotechnology Information (NCBI). The original objective of GEO was to serve as a public repository for high-throughput gene expression data generated mostly by microarray technology. However, the research community quickly applied microarrays to non-gene-expression studies, including examination of genome copy number variation and genome-wide profiling of DNA-binding proteins. Because the GEO database was designed with a flexible structure, it was possible to quickly adapt the repository to store these data types. More recently, as the microarray community switches to next-generation sequencing technologies, GEO has again adapted to host these data sets. Today, GEO stores over 20,000 microarray- and sequence-based functional genomics studies, and continues to handle the majority of direct high-throughput data submissions from the research community. Multiple mechanisms are provided to help users effectively search, browse, download and visualize the data at the level of individual genes or entire studies. This paper describes recent database enhancements, including new search and data representation tools, as well as a brief review of how the community uses GEO data. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/.
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-01-01
Question: How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). Setting: The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. Method: The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added, online services. Main Results: Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was more frequently adopted than manual customization by users. Those who did customize customized My E-Resources and Stay Current more often than Quick Search and File Cabinet. Most of those who customized did so only once. Conclusion: Users did not always take advantage of the services designed to aid their library research experiences. When personalization is available at registration, users readily accepted it. Customization tools were used less frequently; however, more research is needed to determine why this was the case. PMID:20428276
NASA Astrophysics Data System (ADS)
Jurecka, Miroslawa; Niedzielski, Tomasz
2017-04-01
The objective of the approach presented in this paper is to demonstrate the potential of combining two GIS-based models, the mobility model and the ring model, for delineating a region above which an Unmanned Aerial Vehicle (UAV) should fly to support Search and Rescue (SAR) activities. The procedure is based on two concepts, both describing a possible distance or path that a lost person could travel from the initial planning point (either the point last seen or the point last known). The first approach (the ring model) takes into account the crow's-flight distance traveled by a lost person and its probability distribution. The second concept (the mobility model) is based on the estimated travel speed and the associated features of the geographical environment of the search area. In contrast to the ring model, which covers the global (hence more general) SAR perspective, the mobility model represents a regional viewpoint by taking local impedance into consideration. Both models working together can serve well as a starting point for UAV flight planning to strengthen SAR procedures. We present the method of combining the two above-mentioned models in order to delineate the UAV's flight region and increase the Probability of Success for future SAR missions. The procedure is a part of a larger Search and Rescue (SAR) system which is being developed at the University of Wrocław, Poland (research project no. IP2014 032773 financed by the Ministry of Science and Higher Education of Poland). The mobility and ring models have been applied to the Polish territory, and they act in concert to provide the UAV operator with the optimal search region. This is attained in real time so that the UAV-based SAR mission can be initiated quickly.
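How the two models act in concert can be sketched on a raster: a cost-distance surface computed from local impedance (the mobility model) is thresholded by a travel-time budget and then intersected with the statistical distance ring around the initial planning point (the ring model). The impedance values, budget, and ring radii below are invented numbers for illustration, not those used in the project.

```python
# Sketch of combining the two models on a raster: cells reachable within a
# travel-time budget over a local impedance surface (mobility model),
# intersected with a distance ring around the initial planning point
# (ring model). All numbers are invented for illustration.
import heapq
import numpy as np

def cost_distance(impedance, start):
    """Dijkstra over a 4-connected grid; cell value = minutes to cross it."""
    h, w = impedance.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + impedance[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(pq, (nd, (ny, nx)))
    return dist

rng = np.random.default_rng(4)
impedance = rng.uniform(1.0, 6.0, size=(100, 100))  # minutes per cell (assumed)
ipp = (50, 50)                                       # initial planning point

travel = cost_distance(impedance, ipp)               # mobility-model surface
yy, xx = np.indices(impedance.shape)
crow = np.hypot(yy - ipp[0], xx - ipp[1])            # crow's-flight distance (cells)

mobility_region = travel <= 120.0         # reachable within a 2-hour budget
ring_region = (crow >= 5) & (crow <= 30)  # e.g. an intermediate distance ring
uav_region = mobility_region & ring_region
print("cells in combined UAV search region:", int(uav_region.sum()))
```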
Lowes, Lori E; Goodale, David; Xia, Ying; Postenka, Carl; Piaseczny, Matthew M; Paczkowski, Freeman; Allan, Alison L
2016-11-15
Metastasis is the cause of most prostate cancer (PCa) deaths and has been associated with circulating tumor cells (CTCs). The presence of ≥5 CTCs/7.5mL of blood is a poor prognosis indicator in metastatic PCa when assessed by the CellSearch® system, the "gold standard" clinical platform. However, ~35% of metastatic PCa patients assessed by CellSearch® have undetectable CTCs. We hypothesize that this is due to epithelial-to-mesenchymal transition (EMT) and subsequent loss of necessary CTC detection markers, with important implications for PCa metastasis. Two pre-clinical assays were developed to assess human CTCs in xenograft models; one comparable to CellSearch® (EpCAM-based) and one detecting CTCs semi-independent of EMT status via combined staining with EpCAM/HLA (human leukocyte antigen). In vivo differences in CTC generation, kinetics, metastasis and EMT status were determined using 4 PCa models with progressive epithelial (LNCaP, LNCaP-C42B) to mesenchymal (PC-3, PC-3M) phenotypes. Assay validation demonstrated that the CellSearch®-based assay failed to detect a significant number (~40-50%) of mesenchymal CTCs. In vivo, PCa with an increasingly mesenchymal phenotype shed greater numbers of CTCs more quickly and with greater metastatic capacity than PCa with an epithelial phenotype. Notably, the CellSearch®-based assay captured the majority of CTCs shed during early-stage disease in vivo, and only after establishment of metastases were a significant number of undetectable CTCs present. This study provides important insight into the influence of EMT on CTC generation and subsequent metastasis, and highlights that novel technologies aimed at capturing mesenchymal CTCs may only be useful in the setting of advanced metastatic disease.
QUICK - An interactive software environment for engineering design
NASA Technical Reports Server (NTRS)
Skinner, David L.
1989-01-01
QUICK, an interactive software environment for engineering design, provides a programmable FORTRAN-like calculator interface to a wide range of data structures as well as both built-in and user created functions. QUICK also provides direct access to the operating systems of eight different machine architectures. The evolution of QUICK and a brief overview of the current version are presented.
Keegan, Ronan M; Bibby, Jaclyn; Thomas, Jens; Xu, Dong; Zhang, Yang; Mayans, Olga; Winn, Martyn D; Rigden, Daniel J
2015-02-01
AMPLE clusters and truncates ab initio protein structure predictions, producing search models for molecular replacement. Here, an interesting degree of complementarity is shown between targets solved using the different ab initio modelling programs QUARK and ROSETTA. Search models derived from either program collectively solve almost all of the all-helical targets in the test set. Initial solutions produced by Phaser after only 5 min perform surprisingly well, improving the prospects for in situ structure solution by AMPLE during synchrotron visits. Taken together, the results show the potential for AMPLE to run more quickly and successfully solve more targets than previously suspected.
Farokhzadian, Jamileh; Khajouei, Reza; Ahmadian, Leila
2015-08-01
With the explosion of medical information and the emergence of evidence-based practice (EBP) in the healthcare system, searching, retrieving and selecting information for clinical decision-making are becoming required skills for nurses. The aims of this study were to examine the use of different medical information resources by nurses and their information searching and retrieving skills in the context of EBP. A descriptive, cross-sectional study was conducted in four teaching hospitals in Iran. Data were collected from 182 nurses using a questionnaire in 2014. The nurses indicated that they use more human and printed resources than electronic resources to seek information (mean=2.83, SD=1.5; mean=2.77, SD=1.07; and mean=2.13, SD=0.88, respectively). To search online resources, the nurses use quick/basic search features more frequently (mean=2.45, SD=1.15) than other search features such as advanced search, index browsing and MeSH term searching (1.74≤mean≤2.30, SD=1.01). At least 80% of the nurses were not aware of the purpose or function of search operators such as Boolean and proximity operators. In response to the question measuring the nurses' skill in developing an effective search statement using Boolean operators, only 20% of them selected the more appropriate statement, which used some synonyms of the concepts in a given subject. The study showed that the information seeking and retrieval skills of the nurses were poor and there were clear deficits in the use of updated information resources. To compensate for this lack of EBP competency, nurses may resort to human resources. In order to use the latest, up-to-date evidence independently, nurses need to improve their information literacy. To reach this goal, clinical librarians, health information specialists, nursing faculties, and clinical nurse educators and mentors can play key roles by providing educational programs. Providing access to online resources in clinical wards can also encourage nurses to learn and use these resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
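The kind of search statement the questionnaire asked about, with synonyms of each concept OR-ed together and the concept groups AND-ed, can be illustrated in a few lines; the topic and synonym lists are arbitrary examples, not taken from the study.

```python
# Building an effective Boolean search statement: OR together synonyms of
# each concept, then AND the concept groups. Terms are arbitrary examples.
concepts = [
    ["pressure ulcer", "pressure injury", "bedsore", "decubitus ulcer"],
    ["prevention", "prophylaxis"],
    ["nursing"],
]
statement = " AND ".join(
    "(" + " OR ".join(f'"{term}"' for term in group) + ")" for group in concepts
)
print(statement)
# Prints a single line such as:
# ("pressure ulcer" OR "pressure injury" OR ...) AND ("prevention" OR "prophylaxis") AND ("nursing")
```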
14 CFR 25.1329 - Flight guidance system.
Code of Federal Regulations, 2014 CFR
2014-01-01
... (or equivalent). The autothrust quick disengagement controls must be located on the thrust control... wheel (or equivalent) and thrust control levers. (b) The effects of a failure of the system to disengage... guidance system. (a) Quick disengagement controls for the autopilot and autothrust functions must be...
14 CFR 25.1329 - Flight guidance system.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (or equivalent). The autothrust quick disengagement controls must be located on the thrust control... wheel (or equivalent) and thrust control levers. (b) The effects of a failure of the system to disengage... guidance system. (a) Quick disengagement controls for the autopilot and autothrust functions must be...
14 CFR 25.1329 - Flight guidance system.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (or equivalent). The autothrust quick disengagement controls must be located on the thrust control... wheel (or equivalent) and thrust control levers. (b) The effects of a failure of the system to disengage... guidance system. (a) Quick disengagement controls for the autopilot and autothrust functions must be...
14 CFR 25.1329 - Flight guidance system.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (or equivalent). The autothrust quick disengagement controls must be located on the thrust control... wheel (or equivalent) and thrust control levers. (b) The effects of a failure of the system to disengage... guidance system. (a) Quick disengagement controls for the autopilot and autothrust functions must be...
14 CFR 25.1329 - Flight guidance system.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (or equivalent). The autothrust quick disengagement controls must be located on the thrust control... wheel (or equivalent) and thrust control levers. (b) The effects of a failure of the system to disengage... guidance system. (a) Quick disengagement controls for the autopilot and autothrust functions must be...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.
NASA Technical Reports Server (NTRS)
Pfister, Robin; McMahon, Joe
2006-01-01
Power User Interface 5.0 (PUI) is a system of middleware written for expert users in the Earth-science community. PUI enables expedited ordering of data granules on the basis of specific granule-identifying information that the users already know or can assemble. PUI also enables expert users to perform quick searches for orderable-granule information for use in preparing orders. PUI 5.0 is available in two versions (note: PUI 6.0 has command-line mode only): a Web-based application program and a UNIX command-line-mode client program. Both versions include modules that perform data-granule-ordering functions in conjunction with external systems. The Web-based version works with the Earth Observing System Clearing House (ECHO) metadata catalog and order-entry services and with an open-source order-service broker server component, called the Mercury Shopping Cart, that is provided separately by Oak Ridge National Laboratory through the Department of Energy. The command-line version works with the ECHO metadata and order-entry process service. Both versions of PUI ultimately use ECHO to process an order to be sent to a data provider. Ordered data are provided through means outside the PUI software system.
Kuhn, Stefan; Schlörer, Nils E
2015-08-01
With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database while granting full access to nmrshiftdb2's World Wide Web database. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, enables download of spectra via a web interface, and provides integrated access to the prediction, search, and assignment tools of the NMR database. For staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.
Précis of Simple heuristics that make us smart.
Todd, P M; Gigerenzer, G
2000-10-01
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
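To make the building blocks concrete, the following is a minimal sketch (not taken from the book) of one such heuristic, "take-the-best": cues are searched in order of validity, search stops at the first cue that discriminates between the options, and that single cue decides. The cue names and validities in the example are hypothetical.

# Illustrative sketch of one-reason decision making ("take-the-best").
def take_the_best(option_a, option_b, cues):
    """Return 'A', 'B', or 'guess' for the option judged larger.

    option_a, option_b: dicts mapping cue name -> 1 (positive) or 0 (negative).
    cues: list of (cue_name, validity) pairs.
    """
    # Search rule: examine cues from highest to lowest validity.
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        # Stopping rule: stop at the first cue that discriminates.
        if a != b:
            # Decision rule: choose the option with the positive cue value.
            return "A" if a > b else "B"
    return "guess"  # no cue discriminates; fall back to guessing

# Example: which of two cities is larger? (hypothetical cues and validities)
cues = [("has_airport", 0.9), ("is_capital", 0.8), ("has_university", 0.7)]
city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 0}
print(take_the_best(city_a, city_b, cues))  # -> "B" (decided by 'is_capital')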
Phaser.MRage: automated molecular replacement
Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J.; Oeffner, Robert D.; Adams, Paul D.; Read, Randy J.
2013-01-01
Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement. PMID:24189240
orthoFind Facilitates the Discovery of Homologous and Orthologous Proteins.
Mier, Pablo; Andrade-Navarro, Miguel A; Pérez-Pulido, Antonio J
2015-01-01
Finding homologous and orthologous protein sequences is often the first step in evolutionary studies, annotation projects, and experiments of functional complementation. Despite all currently available computational tools, there is a requirement for easy-to-use tools that provide functional information. Here, a new web application called orthoFind is presented, which allows a quick search for homologous and orthologous proteins given one or more query sequences, allowing a recurrent and exhaustive search against reference proteomes, and being able to include user databases. It addresses the protein multidomain problem, searching for homologs with the same domain architecture, and gives a simple functional analysis of the results to help in the annotation process. orthoFind is easy to use and has been proven to provide accurate results with different datasets. Availability: http://www.bioinfocabd.upo.es/orthofind/.
Phaser.MRage: automated molecular replacement.
Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J; Oeffner, Robert D; Adams, Paul D; Read, Randy J
2013-11-01
Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed by the PSO algorithm because of its strong global searching ability and robustness to random initial values; however, the PSO algorithm converges slowly near the global optimum. Therefore, when the change in the fitness function becomes smaller than a predefined value, the search is switched to the LPM to accelerate the process. With the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results from 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the PSO algorithm and the LPM alone in terms of global searching capability and convergence rate. Moreover, the PSO-LPM algorithm is also robust to random initial values.
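The hand-over logic described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: a basic PSO runs until the improvement in best fitness drops below a threshold, and the best particle is then passed as an initial guess to a local refinement stage (here a generic SciPy optimizer stands in for the Legendre pseudospectral method; all parameter values are arbitrary).

import numpy as np

def pso_until_stall(f, dim, n_particles=30, tol=1e-6, max_iter=500,
                    w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Run a basic PSO and stop when the best fitness stalls (change < tol).

    Returns the best position found, to be used as the initial guess for a
    faster local stage (the paper hands over to the LPM; any local optimizer
    can stand in for it in this sketch).
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()               # global best
    g_val = pbest_val.min()

    for it in range(max_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        new_g_val = pbest_val.min()
        g = pbest[np.argmin(pbest_val)].copy()
        if it > 10 and g_val - new_g_val < tol:          # improvement stalled: hand over
            break
        g_val = new_g_val
    return g

# Example hand-over to a generic local refiner (standing in for the LPM step).
from scipy.optimize import minimize
sphere = lambda p: float(np.sum(p ** 2))
x0 = pso_until_stall(sphere, dim=4)
print(minimize(sphere, x0, method="Nelder-Mead").x)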
Masic, Izet; Milinovic, Katarina
2012-01-01
Most medical journals now have an electronic version available over public networks. Although printed and electronic versions often exist in parallel, the two forms need not be published simultaneously. The electronic version of a journal can be published a few weeks before the printed form and need not have identical content. The electronic form of a journal may include features that the printed form does not, such as animation or 3D display, and may offer full text (mostly in PDF or XML format), only the table of contents, or just summaries. Access to full text is usually not free and can be achieved only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (6 months or a year). Such journals can be found through network archives such as HighWire Press and Free Medical Journals.com. PubMed and PubMed Central deserve particular mention as the first public digital archives collecting the available medical literature without restriction; they operate within the system of the National Library of Medicine in Bethesda (USA). There are also online medical journals published only in electronic form, which can be searched through online databases. In this paper the authors briefly describe about 30 databases and give short instructions on how to access and search the papers published in indexed medical journals. PMID:23322957
Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.
Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping
2013-06-21
In the field of Global Navigation Satellite System (GNSS) attitude determination, constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints, such as the baseline length, the geometry of multiple baselines and the horizontal attitude angles, have been used extensively to improve the performance of ambiguity resolution. In GNSS/Inertial Navigation System (INS) integrated attitude determination systems using a low-grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem instead of by the IMU sensors independently. However, when a rotation occurs, the angle through which the vehicle has turned within a short time span can be measured accurately by the IMU. This measurement will be treated as a constraint, namely the rate-gyro-integral constraint, which can aid GNSS ambiguity resolution. We will use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly with this constraint imposed during the rotation, which helps speed up the initialization of attitude parameters under dynamic circumstances. This paper studies the application of this new constraint to land vehicles only. The impacts of measurement errors on the effect of this new constraint will be assessed for different grades of IMU and the current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determination.
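A minimal sketch of the candidate-filtering idea follows, under the assumption that each ambiguity candidate can be mapped to the headings it implies at two epochs; that mapping (heading_from_candidate) is a hypothetical placeholder, not part of the paper. Candidates whose implied heading change disagrees with the gyro-integrated rotation by more than a tolerance are discarded.

import math

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def filter_by_gyro_integral(candidates, heading_from_candidate,
                            gyro_delta_heading, tol_rad):
    """Keep only ambiguity candidates consistent with the IMU-measured rotation.

    candidates: iterable of candidate integer-ambiguity sets (opaque objects).
    heading_from_candidate: hypothetical callable returning the pair of
        headings (at epochs t1 and t2, radians) implied by a candidate.
    gyro_delta_heading: rotation angle integrated from the rate gyro over
        the same interval (radians).
    tol_rad: tolerance reflecting gyro drift and GNSS noise.
    """
    kept = []
    for cand in candidates:
        h1, h2 = heading_from_candidate(cand)
        implied = wrap_angle(h2 - h1)
        if abs(wrap_angle(implied - gyro_delta_heading)) <= tol_rad:
            kept.append(cand)
    return kept

# Toy usage with precomputed headings per candidate (all values hypothetical).
headings = {"N1": (0.10, 0.62), "N2": (0.10, 0.15), "N3": (0.11, 0.58)}
survivors = filter_by_gyro_integral(headings.keys(), lambda c: headings[c],
                                    gyro_delta_heading=0.50, tol_rad=0.05)
print(survivors)  # -> ['N1', 'N3']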
ERIC Educational Resources Information Center
Gao, Yuan; Liu, Tzu-Chien; Paas, Fred
2016-01-01
This study compared the effects of effortless selection of target plants using quick respond (QR) code technology to effortful manual search and selection of target plants on learning about plants in a mobile device supported learning environment. In addition, it was investigated whether the effectiveness of the 2 selection methods was…
Interdisciplinary glossary — particle accelerators and medicine
NASA Astrophysics Data System (ADS)
Dmitrieva, V. V.; Dyubkov, V. S.; Nikitaev, V. G.; Ulin, S. E.
2016-02-01
A general concept of a new interdisciplinary glossary, which includes particle accelerator terminology used in medicine as well as relevant medical concepts, is presented. Its structure and usage rules are described. An example illustrating how to quickly search the Glossary for relevant information is considered. The website address where the Glossary can be accessed is given. The Glossary can be refined and supplemented.
Expertise in complex decision making: the role of search in chess 70 years after de Groot.
Connors, Michael H; Burns, Bruce D; Campitelli, Guillermo
2011-01-01
One of the most influential studies in all expertise research is de Groot's (1946) study of chess players, which suggested that pattern recognition, rather than search, was the key determinant of expertise. Many changes have occurred in the chess world since de Groot's study, leading some authors to argue that the cognitive mechanisms underlying expertise have also changed. We decided to replicate de Groot's study to empirically test these claims and to examine whether the trends in the data have changed over time. Six Grandmasters, five International Masters, six Experts, and five Class A players completed the think-aloud procedure for two chess positions. Findings indicate that Grandmasters and International Masters search more quickly than Experts and Class A players, and that both groups today search substantially faster than players in previous studies. The findings, however, support de Groot's overall conclusions and are consistent with predictions made by pattern recognition models. Copyright © 2011 Cognitive Science Society, Inc.
The time course of attentional deployment in contextual cueing.
Jiang, Yuhong V; Sigstad, Heather M; Swallow, Khena M
2013-04-01
The time course of attention is a major characteristic on which different types of attention diverge. In addition to explicit goals and salient stimuli, spatial attention is influenced by past experience. In contextual cueing, behaviorally relevant stimuli are more quickly found when they appear in a spatial context that has previously been encountered than when they appear in a new context. In this study, we investigated the time that it takes for contextual cueing to develop following the onset of search layout cues. In three experiments, participants searched for a T target in an array of Ls. Each array was consistently associated with a single target location. In a testing phase, we manipulated the stimulus onset asynchrony (SOA) between the repeated spatial layout and the search display. Contextual cueing was equivalent for a wide range of SOAs between 0 and 1,000 ms. The lack of an increase in contextual cueing with increasing cue durations suggests that as an implicit learning mechanism, contextual cueing cannot be effectively used until search begins.
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. In addition, the tools for searching databases are usually not adapted to the properties of images, and HTML pages do not allow complex searches. Establishing more comfortable retrieval therefore involves the use of a higher programming level such as JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using JAVA objects as the primary storage container, which are then stored in a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T; Prokosch, U
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. In addition, the tools for searching databases are usually not adapted to the properties of images, and HTML pages do not allow complex searches. Establishing more comfortable retrieval therefore involves the use of a higher programming level such as JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using JAVA objects as the primary storage container, which are then stored in a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval.
The GEOSS Clearinghouse based on the GeoNetwork opensource
NASA Astrophysics Data System (ADS)
Liu, K.; Yang, C.; Wu, H.; Huang, Q.
2010-12-01
The Global Earth Observation System of Systems (GEOSS) is established to support the study of the Earth system in a global community. It provides services for social management, quick response, academic research, and education. The purpose of GEOSS is to achieve comprehensive, coordinated and sustained observations of the Earth system, improve monitoring of the state of the Earth, increase understanding of Earth processes, and enhance prediction of the behavior of the Earth system. In 2009, GEO called for a competition to select an official GEOSS clearinghouse as the source for consolidating catalogs of Earth observations. The Joint Center for Intelligent Spatial Computing at George Mason University worked with USGS to submit a solution based on the open-source platform GeoNetwork. In the spring of 2010, this solution was selected as the product for the GEOSS clearinghouse. The GEOSS Clearinghouse is a common search facility for the intergovernmental Group on Earth Observations (GEO). By providing a list of harvesting functions in its business logic, the GEOSS clearinghouse can collect metadata from distributed catalogs including other GeoNetwork native nodes, webDAV/sitemap/WAF, Catalogue Services for the Web (CSW) 2.0, the GEOSS Component and Service Registry (http://geossregistries.info/), OGC Web Services (WCS, WFS, WMS and WPS), OAI Protocol for Metadata Harvesting 2.0, ArcSDE Server and the local file system. Metadata in the GEOSS clearinghouse are managed in a database (MySQL, PostgreSQL, Oracle, or MckoiDB) and an index of the metadata is maintained through the Lucene engine. Thus, EO data, services, and related resources can be discovered and accessed. The clearinghouse supports a variety of geospatial standards including CSW and SRU for search, FGDC and ISO metadata, and WMS-related OGC standards for data access and visualization, as linked from the metadata.
COUGHLAN, JAMES; MANDUCHI, ROBERTO
2009-01-01
We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. PMID:19960101
Coughlan, James; Manduchi, Roberto
2009-06-01
We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users.
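One common way to turn a segmented marker border into a distance estimate, shown below as an assumption rather than the authors' exact formula, is the pinhole-camera relation: distance ≈ focal length (in pixels) × physical marker width / apparent marker width (in pixels).

def estimate_marker_distance(marker_width_px, marker_width_m, focal_length_px):
    """Estimate camera-to-marker distance with the pinhole-camera relation.

    marker_width_px: apparent width of the segmented marker in the image (pixels).
    marker_width_m: known physical width of the printed marker (meters).
    focal_length_px: camera focal length expressed in pixels (from calibration).
    """
    if marker_width_px <= 0:
        raise ValueError("marker not detected")
    return focal_length_px * marker_width_m / marker_width_px

# Example with made-up numbers: a 0.15 m wide sign spanning 60 px in an image
# from a camera with an 800 px focal length sits about 2 m away.
print(estimate_marker_distance(60, 0.15, 800))  # -> 2.0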
Mobile medical image retrieval
NASA Astrophysics Data System (ADS)
Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning
2011-03-01
Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. In the process of mobile devices becoming reliable and having a functionality equaling that of formerly desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search & access and in this context images can play an important role as they often allow understanding complex scenarios much quicker and easier than free text. Mobile information retrieval in general has skyrocketed over the past year with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes constraints of an information retrieval system including visual and textual information retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access with an example on an iPhone in a web-based environment are presented as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for a use by other smart phone platforms such as Android as well. Constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid choice had to be taken to allow for taking pictures with the cell phone camera and upload them for visual similarity search as most producers of smart phones block this functionality to web applications. Mobile information access and in particular access to images can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster and relevance of documents can be identified quickly through the use of images contained in the text. Problems with the many, often incompatible mobile platforms were discovered and are listed in the text. Mobile information access is a quickly growing domain and the constraints of mobile access also need to be taken into account for image retrieval. The demonstrated access to the medical literature is most relevant as the medical literature and their images are clearly the largest knowledge source in the medical field.
NASA Technical Reports Server (NTRS)
Baumback, J. I.; Davies, A. N.; Vonirmer, A.; Lampen, P. H.
1995-01-01
To assist peak assignment in ion mobility spectrometry it is important to have quality reference data. The reference collection should be stored in a database system which is capable of being searched using spectral or substance information. We propose to build such a database customized for ion mobility spectra. To start off with, it is important to quickly reach a critical mass of data in the collection. We wish to obtain as many spectra combined with their IMS parameters as possible. Spectra suppliers will be rewarded for their participation with access to the database. To make the data exchange between users and system administration possible, it is important to define a file format specially made for the requirements of ion mobility spectra. The format should be computer readable and flexible enough for extensive comments to be included. In this document we propose a data exchange format, and we would like you to give comments on it. For international data exchange it is important to have a standard data exchange format. We propose to base the definition of this format on the JCAMP-DX protocol, which was developed for the exchange of infrared spectra. This standard, developed by the Joint Committee on Atomic and Molecular Physical Data, is of flexible design. The aim of this paper is to adapt JCAMP-DX to the special requirements of ion mobility spectra.
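To illustrate the kind of labelled-data-record file being proposed, the sketch below writes a minimal JCAMP-DX-style record for an ion mobility spectrum. The core labels (##TITLE=, ##JCAMP-DX=, ##NPOINTS=, ##XYPOINTS=, ##END=) follow the general JCAMP-DX convention, while the IMS-specific '##$' labels and the DATA TYPE value are purely hypothetical placeholders, not the format actually defined in this proposal.

def write_ims_jcamp(path, title, drift_times_ms, intensities, ims_params):
    """Write a minimal JCAMP-DX-style record for an ion mobility spectrum.

    Core labels follow the JCAMP-DX '##LABEL=value' convention; the
    IMS-specific labels (e.g. ##$DRIFT TUBE LENGTH=) are hypothetical
    user-defined extensions (JCAMP-DX reserves the '##$' prefix for these).
    """
    assert len(drift_times_ms) == len(intensities)
    lines = [
        f"##TITLE={title}",
        "##JCAMP-DX=4.24",
        "##DATA TYPE=ION MOBILITY SPECTRUM",   # placeholder data type
        "##XUNITS=MILLISECONDS",
        "##YUNITS=ARBITRARY UNITS",
        f"##$DRIFT TUBE LENGTH={ims_params.get('drift_tube_cm', '')}",
        f"##$DRIFT GAS={ims_params.get('drift_gas', '')}",
        f"##NPOINTS={len(drift_times_ms)}",
        "##XYPOINTS=(XY..XY)",
    ]
    lines += [f"{x:.4f}, {y:.1f}" for x, y in zip(drift_times_ms, intensities)]
    lines.append("##END=")
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Toy usage with made-up numbers.
write_ims_jcamp("example.jdx", "toy IMS spectrum",
                [4.20, 4.25, 4.30], [12.0, 85.0, 17.0],
                {"drift_tube_cm": 12, "drift_gas": "nitrogen"})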
46 CFR 153.284 - Characteristics of required quick closing valves.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Characteristics of required quick closing valves. 153.284 Section 153.284 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK... and Equipment Piping Systems and Cargo Handling Equipment § 153.284 Characteristics of required quick...
46 CFR 153.284 - Characteristics of required quick closing valves.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Characteristics of required quick closing valves. 153.284 Section 153.284 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK... and Equipment Piping Systems and Cargo Handling Equipment § 153.284 Characteristics of required quick...
Quick Prototyping of Educational Software: An Object-Oriented Approach.
ERIC Educational Resources Information Center
Wong, Simon C-H
1994-01-01
Introduces and demonstrates a quick-prototyping model for educational software development that can be used by teachers developing their own courseware using an object-oriented programming system. Development of a courseware package called "The Match-Maker" is explained as an example that uses HyperCard for quick prototyping. (Contains…
Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools
NASA Astrophysics Data System (ADS)
McCutchan, Elizabeth
2017-01-01
The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radousky, H B
This month's issue has the following articles: (1) Innovative Solutions Reap Rewards--Commentary by George H. Miller; (2) Surveillance on the Fly--An airborne surveillance system can track up to 8,000 moving objects in an area the size of a small city; (3) A Detector Radioactive Particles Can't Evade--An ultrahigh-resolution spectrometer can detect the minute thermal energy deposited by a single gamma ray or neutron; (4) Babel Speeds Communication among Programming Languages--The Babel program allows software applications in different programming languages to communicate quickly; (5) A Gem of a Software Tool--The data-mining software Sapphire allows scientists to analyze enormous data sets generated by diverse applications; (6) Interferometer Improves the Search for Planets--With externally dispersed interferometry, astronomers can use an inexpensive, compact instrument to search for distant planets; (7) Efficiently Changing the Color of Laser Light--Yttrium-calcium-oxyborate crystals provide an efficient, compact approach to wavelength conversion for high-average-power lasers; (8) Pocket-Sized Test Detects Trace Explosives--A detection kit sensitive to more than 30 explosives provides an inexpensive, easy-to-use tool for security forces everywhere; (9) Tailor-Made Microdevices Serve Big Needs--The Center for Micro- and Nanotechnology develops tiny devices for national security.
Freedom and Security — Responses to the Threat of International Terrorism
NASA Astrophysics Data System (ADS)
Tinnefeld, Marie-Theres
The September 11 attacks have led to a number of changes in the legislative framework of the EU member states. Governments intended to react quickly, powerfully and with high public visibility in order to justify the use of technology in the interests of national security. The new goal is to search for terrorist activity in the ocean of telecommunications data retained by communications providers and accessed by intelligence authorities. EU member states have to put a national data retention law in place by March 2009. In Germany, the most recent problem is the question of the legality of secret online surveillance and searches of IT systems, especially individuals' PCs. The German Federal Constitutional Court has held that the area of governmental authority for intervention must be limited by the constitutional protection of human dignity and fundamental rights such as information privacy, telecommunications secrecy and respect for the home. In February 2008 the highest German court created a new human right to the confidentiality and integrity of IT systems. The decision has to be understood as a reaction to the widespread use of invisible information technology by legal authorities and their secret and comprehensive surveillance of citizens.
Mathematical Foundation for Plane Covering Using Hexagons
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
1999-01-01
This work describes the development and mathematical underpinnings of the algorithms previously developed for covering the plane and for addressing the elements of the covering. The algorithms are of interest in that they provide a simple, systematic way of increasing or decreasing resolution, in the sense that if the covering is in place and an image is superimposed upon it, then the image may be viewed in a rough form or in a very detailed form with minimal effort. Such ability allows for quick searches of crude forms to determine a class in which to make a detailed search. In addition, the addressing algorithms provide an efficient way to process large data sets that have related subsets. The algorithms produced were based in part upon the work of D. Lucas, "A Multiplication in N Space", which suggested a set of three vectors, any two of which would serve as a basis for the plane, and also that the hexagon is the natural geometric object to be used in a covering with the suggested basis. The second portion is a refinement of the eyeball vision system, the globular viewer.
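A minimal sketch of hexagonal addressing in the same spirit is given below, using standard axial coordinates built from two of three plane directions; this is a common construction offered as an assumption, not the paper's actual algorithm.

import math

# Addressing a hexagonal covering of the plane with axial coordinates (q, r).
SIZE = 1.0  # circumradius of each hexagon ("pointy-top" orientation)

def hex_center(q, r, size=SIZE):
    """Center of the hexagon with axial address (q, r)."""
    x = size * math.sqrt(3) * (q + r / 2.0)
    y = size * 1.5 * r
    return x, y

def point_to_hex(x, y, size=SIZE):
    """Axial address (q, r) of the hexagon covering the point (x, y)."""
    q = (math.sqrt(3) / 3 * x - 1.0 / 3 * y) / size
    r = (2.0 / 3 * y) / size
    # Round fractional axial coordinates via cube coordinates (q + r + s = 0).
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return int(rq), int(rr)

# A coarser view is obtained by scaling 'size' up: nearby fine-level hexagons
# collapse into the same coarse address, which is the sense in which the
# covering supports quick, low-resolution searches before a detailed one.
print(point_to_hex(2.0, 1.2))          # fine address
print(point_to_hex(2.0, 1.2, size=4))  # coarse address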
Inhibitors for Androgen Receptor Activation Surfaces
2006-09-01
Thyroid hormone, 3,5,3′-triiodo-L-thyronine (T3; Sigma)... SRC2 full-length protein was obtained by using a TNT T7 quick-coupled transcription/translation system. Transformation and preparation of bacterial... is conducted in triplicate for T3 and hit compounds. 1. Produce full-length hTRβ using a TNT T7 quick-coupled transcription/translation system (protocol excerpt: www.stke.org/cgi/content/full/sigtrans;2006/341/pl3).
Keegan, Ronan M.; Bibby, Jaclyn; Thomas, Jens; Xu, Dong; Zhang, Yang; Mayans, Olga; Winn, Martyn D.; Rigden, Daniel J.
2015-01-01
AMPLE clusters and truncates ab initio protein structure predictions, producing search models for molecular replacement. Here, an interesting degree of complementarity is shown between targets solved using the different ab initio modelling programs QUARK and ROSETTA. Search models derived from either program collectively solve almost all of the all-helical targets in the test set. Initial solutions produced by Phaser after only 5 min perform surprisingly well, improving the prospects for in situ structure solution by AMPLE during synchrotron visits. Taken together, the results show the potential for AMPLE to run more quickly and successfully solve more targets than previously suspected. PMID:25664744
NASA Astrophysics Data System (ADS)
Olender, M.; Krenczyk, D.
2016-08-01
Modern enterprises have to react quickly to dynamic changes in the market due to changing customer requirements and expectations. One of the key areas of production management that must continuously evolve, by searching for new methods and tools for increasing the efficiency of manufacturing systems, is production flow planning and control. These aspects are closely connected with the ability to implement the concepts of Virtual Enterprises (VE) and Virtual Manufacturing Networks (VMN), in which an integrated infrastructure of flexible resources is created. In the proposed approach, the role of the players is performed by objects associated with the objective functions, which allows multiobjective production flow planning problems to be solved using game theory, i.e., the theory of strategic situations. For defined production system and production order models, ways of solving the production route planning problem in a VMN are presented through computational examples for different variants of production flow. A possible decision strategy, together with an analysis of the calculation results, is shown.
BioMart Central Portal: an open database network for the biological community
Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507
Spiral: Automated Computing for Linear Transforms
NASA Astrophysics Data System (ADS)
Püschel, Markus
2010-09-01
Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.
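The core loop behind such a system can be illustrated with a deliberately small toy, shown below: enumerate alternative implementations of the same function, time each on the actual platform, and keep the fastest. The candidate implementations here are trivial stand-ins, not the generated transform algorithms Spiral actually produces.

import timeit
import numpy as np

def dot_loop(a, b):
    # Naive Python loop.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_sum(a, b):
    # Generator expression with built-in sum.
    return sum(x * y for x, y in zip(a, b))

def dot_numpy(a, b):
    # Vectorized library call.
    return float(np.dot(a, b))

def autotune(candidates, a, b, repeats=5, number=200):
    """Time each candidate on this platform and return the fastest."""
    timings = {}
    for name, fn in candidates.items():
        t = timeit.repeat(lambda: fn(a, b), repeat=repeats, number=number)
        timings[name] = min(t)
    best = min(timings, key=timings.get)
    return best, timings

a = np.random.rand(4096)
b = np.random.rand(4096)
best, timings = autotune({"loop": dot_loop, "sum": dot_sum, "numpy": dot_numpy}, a, b)
print("fastest on this platform:", best, timings)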
Search strategy using LHC pileup interactions as a zero bias sample
NASA Astrophysics Data System (ADS)
Nachman, Benjamin; Rubbo, Francesco
2018-05-01
Due to a limited bandwidth and a large proton-proton interaction cross section relative to the rate of interesting physics processes, most events produced at the Large Hadron Collider (LHC) are discarded in real time. A sophisticated trigger system must quickly decide which events should be kept and is very efficient for a broad range of processes. However, there are many processes that cannot be accommodated by this trigger system. Furthermore, there may be models of physics beyond the standard model (BSM) constructed after data taking that could have been triggered, but no trigger was implemented at run time. Both of these cases can be covered by exploiting pileup interactions as an effective zero bias sample. At the end of high-luminosity LHC operations, this zero bias dataset will have accumulated about 1 fb⁻¹ of data from which a bottom line cross section limit of O(1) fb can be set for BSM models already in the literature and those yet to come.
Dish layouts analysis method for concentrative solar power plant.
Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin
2016-01-01
Designs that maximize the use of solar radiation for a given reflective area without increasing investment costs are important to solar power plant construction. We here provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting and rotation. Computer simulations, performed with information on a dish Stirling system as well as DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating columns by a certain amount can bring more benefits.
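The elided shading criterion is not reproduced here, but under the simplifying assumption that each sun-tracking dish aperture projects to a circle on the plane normal to the sun direction, the shaded area contributed by one neighbor reduces to the standard circle-circle intersection (lens) area, sketched below.

import math

def circle_overlap_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and center distance d.

    Used as a stand-in for the shaded aperture area when two sun-tracking
    dishes are modelled as circles projected onto the plane normal to the
    sun direction (a simplifying assumption, not the paper's exact model).
    """
    if d >= r1 + r2:
        return 0.0                          # no overlap
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2   # smaller circle fully shaded
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                         * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

# Example: two 5 m radius projected apertures whose centers are 6 m apart.
print(circle_overlap_area(5.0, 5.0, 6.0))  # ≈ 22.4 m^2 shaded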
Extended Phase-Space Methods for Enhanced Sampling in Molecular Simulations: A Review.
Fujisaki, Hiroshi; Moritsugu, Kei; Matsunaga, Yasuhiro; Morishita, Tetsuya; Maragliano, Luca
2015-01-01
Molecular Dynamics simulations are a powerful approach to study biomolecular conformational changes or protein-ligand, protein-protein, and protein-DNA/RNA interactions. Straightforward applications, however, are often hampered by incomplete sampling, since in a typical simulated trajectory the system will spend most of its time trapped by high energy barriers in restricted regions of the configuration space. Over the years, several techniques have been designed to overcome this problem and enhance space sampling. Here, we review a class of methods that rely on the idea of extending the set of dynamical variables of the system by adding extra ones associated to functions describing the process under study. In particular, we illustrate the Temperature Accelerated Molecular Dynamics (TAMD), Logarithmic Mean Force Dynamics (LogMFD), and Multiscale Enhanced Sampling (MSES) algorithms. We also discuss combinations with techniques for searching reaction paths. We show the advantages presented by this approach and how it allows to quickly sample important regions of the free-energy landscape via automatic exploration.
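As a concrete illustration of the extended-variable idea, the following is a one-dimensional toy sketch of TAMD (not code from the review): an extra variable z is harmonically coupled to the collective variable θ(x) = x and evolved by overdamped Langevin dynamics at a higher temperature, so it drags x across barriers that x would rarely cross at the physical temperature. All parameter values are arbitrary.

import math
import random

def dV(x):                     # gradient of the double-well potential V(x) = (x^2 - 1)^2
    return 4.0 * x * (x * x - 1.0)

def tamd_trajectory(steps=200000, dt=1e-3, k=50.0,
                    gamma_x=1.0, gamma_z=10.0, kT=0.1, kT_bar=1.0, seed=1):
    random.seed(seed)
    x, z = -1.0, -1.0
    traj = []
    for _ in range(steps):
        fx = -dV(x) - k * (x - z)          # physical force plus coupling spring
        fz = -k * (z - x)                  # extended variable feels only the spring
        # Overdamped Langevin updates at temperatures kT (for x) and kT_bar (for z).
        x += dt * fx / gamma_x + math.sqrt(2 * kT * dt / gamma_x) * random.gauss(0, 1)
        z += dt * fz / gamma_z + math.sqrt(2 * kT_bar * dt / gamma_z) * random.gauss(0, 1)
        traj.append((x, z))
    return traj

traj = tamd_trajectory()
crossings = sum(1 for (x0, _), (x1, _) in zip(traj, traj[1:]) if x0 * x1 < 0)
print("barrier crossings of x:", crossings)  # plain dynamics at kT=0.1 would rarely cross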
Design of Clinical Support Systems Using Integrated Genetic Algorithm and Support Vector Machine
NASA Astrophysics Data System (ADS)
Chen, Yung-Fu; Huang, Yung-Fa; Jiang, Xiaoyi; Hsu, Yuan-Nian; Lin, Hsuan-Hung
A clinical decision support system (CDSS) provides knowledge and specific information for clinicians to enhance diagnostic efficiency and improve healthcare quality. An appropriate CDSS can greatly improve patient safety, improve healthcare quality, and increase cost-effectiveness. The support vector machine (SVM) is believed to be superior to traditional statistical and neural network classifiers. However, it is critical to determine a suitable combination of SVM parameters to achieve good classification performance. A genetic algorithm (GA) can find an optimal solution within an acceptable time and is faster than a greedy algorithm with an exhaustive search strategy. By taking advantage of the GA's ability to quickly select salient features and adjust SVM parameters, a method using an integrated GA and SVM (IGS), which differs from the traditional approach of using the GA for feature selection and the SVM for classification separately, was used to design CDSSs for prediction of successful ventilation weaning, diagnosis of patients with severe obstructive sleep apnea, and discrimination of different cell types from Pap smears. The results show that IGS is better than methods using the SVM alone or a linear discriminator.
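The joint search can be sketched compactly as follows. This is an illustration of the general integrated GA+SVM idea, not the authors' IGS code: each GA individual encodes a feature mask together with SVM hyperparameters, and its fitness is cross-validated accuracy; scikit-learn and a public toy dataset stand in for the clinical data, and all GA settings are arbitrary.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_features = X.shape[1]

def fitness(ind):
    # Individual = (boolean feature mask, log2 C, log2 gamma).
    mask, log_c, log_gamma = ind
    if not mask.any():
        return 0.0
    clf = SVC(C=2.0 ** log_c, gamma=2.0 ** log_gamma)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def random_individual():
    return (rng.random(n_features) < 0.5, rng.uniform(-5, 15), rng.uniform(-15, 3))

def mutate(ind, p=0.1):
    mask, log_c, log_gamma = ind
    mask = np.where(rng.random(n_features) < p, ~mask, mask)   # flip bits with prob p
    return (mask, log_c + rng.normal(0, 1), log_gamma + rng.normal(0, 1))

def crossover(a, b):
    cut = rng.integers(1, n_features)
    mask = np.concatenate([a[0][:cut], b[0][cut:]])
    return (mask, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2)

pop = [random_individual() for _ in range(12)]
for gen in range(10):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:6]                                       # truncation selection
    children = [mutate(crossover(parents[rng.integers(6)], parents[rng.integers(6)]))
                for _ in range(6)]
    pop = parents + children

best = max(pop, key=fitness)
print("CV accuracy:", round(fitness(best), 3), "features kept:", int(best[0].sum()))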
BioMart Central Portal: an open database network for the biological community.
Guberman, Jonathan M; Ai, J; Arnaiz, O; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J; Di Génova, A; Forbes, Simon; Fujisawa, T; Gadaleta, E; Goodstein, D M; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D T; Wong-Erasmus, Marie; Yao, L; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities.
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
Comparison of methods for the detection of gravitational waves from unknown neutron stars
NASA Astrophysics Data System (ADS)
Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.
2016-12-01
Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time domain F -statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.
Current algorithmic solutions for peptide-based proteomics data generation and identification.
Hoopmann, Michael R; Moritz, Robert L
2013-02-01
Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.
ABMapper: a suffix array-based tool for multi-location searching and splice-junction mapping.
Lou, Shao-Ke; Ni, Bing; Lo, Leung-Yau; Tsui, Stephen Kwok-Wing; Chan, Ting-Fung; Leung, Kwong-Sak
2011-02-01
Sequencing reads generated by RNA-sequencing (RNA-seq) must first be mapped back to the genome through alignment before they can be further analyzed. Current fast and memory-saving short-read mappers could give us a quick view of the transcriptome. However, they are neither designed for reads that span across splice junctions nor for repetitive reads, which can be mapped to multiple locations in the genome (multi-reads). Here, we describe a new software package: ABMapper, which is specifically designed for exploring all putative locations of reads that are mapped to splice junctions or repetitive in nature. The software is freely available at: http://abmapper.sourceforge.net/. The software is written in C++ and PERL. It runs on all major platforms and operating systems including Windows, Mac OS X and LINUX.
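The abstract does not detail ABMapper's internals, but the basic suffix-array trick that such multi-location mappers build on can be sketched briefly: once the suffixes of the reference are sorted, every exact occurrence of a read, including a repetitive multi-read, lies in one contiguous band of the array that two binary searches can locate. The construction below is a deliberately simple illustration, not ABMapper's implementation.

def build_suffix_array(text):
    """Suffix array of 'text' (simple O(n^2 log n) construction; real mappers
    use linear-time algorithms and compressed indexes)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_all_occurrences(text, sa, pattern):
    """Return all start positions of 'pattern' in 'text'.

    Because the suffixes are sorted, every suffix starting with 'pattern'
    sits in one contiguous band of the suffix array; two binary searches
    locate the band, so repetitive reads map to all their locations at once.
    """
    n, m = len(sa), len(pattern)

    lo, hi = 0, n
    while lo < hi:                       # leftmost suffix whose prefix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo

    lo, hi = start, n
    while lo < hi:                       # leftmost suffix whose prefix > pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

# Toy reference and a repetitive read mapping to several locations.
reference = "ACGTACGTTACGT"
sa = build_suffix_array(reference)
print(find_all_occurrences(reference, sa, "ACGT"))  # -> [0, 4, 9]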
Patel, Manesh R; Schardt, Connie M; Sanders, Linda L; Keitz, Sheri A
2006-10-01
The paper compares the speed, validity, and applicability of two different protocols for searching the primary medical literature. A randomized trial involving medicine residents was performed during an inpatient general medicine rotation. Thirty-two internal medicine residents were block randomized into four groups of eight. The success rate of each search protocol was measured by perceived search time, the number of questions answered, and the proportion of articles that were applicable and valid. Residents randomized to the MEDLINE-first group (protocol A) searched 120 questions, and residents randomized to the MEDLINE-last group (protocol B) searched 133 questions. Answers were found for 104 clinical questions (86.7%) in protocol A and for 117 (88%) in protocol B. In protocol A, residents reported that 26 (25.2%) of the answers were obtained quickly or rated as "fast" (<5 minutes), as opposed to 55 (51.9%) in protocol B (P = 0.0004). A subset of questions and articles (n = 79) was reviewed by faculty, who found that both protocols identified similar numbers of answer articles that addressed the questions and were felt to be valid using critical appraisal criteria. For resident-generated clinical questions, both protocols produced a similarly high percentage of applicable and valid articles. The MEDLINE-last search protocol was perceived to be faster; however, in the MEDLINE-last protocol, a significant portion of questions (23%) still required searching MEDLINE to find an answer.
The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes
NASA Technical Reports Server (NTRS)
Remington, R.; Williams, D.
1984-01-01
Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.
Proceedings of the 2004 High Spatial Resolution Commercial Imagery Workshop
NASA Technical Reports Server (NTRS)
2006-01-01
Topics covered include: NASA Applied Sciences Program; USGS Land Remote Sensing: Overview; QuickBird System Status and Product Overview; ORBIMAGE Overview; IKONOS 2004 Calibration and Validation Status; OrbView-3 Spatial Characterization; On-Orbit Modulation Transfer Function (MTF) Measurement of QuickBird; Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season; Image Quality Evaluation of QuickBird Super Resolution and Revisit of IKONOS: Civil and Commercial Application Project (CCAP); On-Orbit System MTF Measurement; QuickBird Post Launch Geopositional Characterization Update; OrbView-3 Geometric Calibration and Geopositional Accuracy; Geopositional Statistical Methods; QuickBird and OrbView-3 Geopositional Accuracy Assessment; Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images; Laboratory Measurement of Bidirectional Reflectance of Radiometric Tarps; Stennis Space Center Verification and Validation Capabilities; Joint Agency Commercial Imagery Evaluation (JACIE) Team; Adjacency Effects in High Resolution Imagery; Effect of Pulse Width vs. GSD on MTF Estimation; Camera and Sensor Calibration at the USGS; QuickBird Geometric Verification; Comparison of MODTRAN to Heritage-based Results in Vicarious Calibration at University of Arizona; Using Remotely Sensed Imagery to Determine Impervious Surface in Sioux Falls, South Dakota; Estimating Sub-Pixel Proportions of Sagebrush with a Regression Tree; How Do YOU Use the National Land Cover Dataset?; The National Map Hazards Data Distribution System; Recording a Troubled World; What Does This-Have to Do with This?; When Can a Picture Save a Thousand Homes?; InSAR Studies of Alaska Volcanoes; Earth Observing-1 (EO-1) Data Products; Improving Access to the USGS Aerial Film Collections: High Resolution Scanners; Improving Access to the USGS Aerial Film Collections: Phoenix Digitizing System Product Distribution; System and Product Characterization: Issues Approach; Innovative Approaches to Analysis of Lidar Data for the National Map; Changes in Imperviousness near Military Installations; Geopositional Accuracy Evaluations of QuickBird and OrbView-3: Civil and Commercial Applications Project (CCAP); Geometric Accuracy Assessment: OrbView ORTHO Products; QuickBird Radiometric Calibration Update; OrbView-3 Radiometric Calibration; QuickBird Radiometric Characterization; NASA Radiometric Characterization; Establishing and Verifying the Traceability of Remote-Sensing Measurements to International Standards; QuickBird Applications; Airport Mapping and Perpetual Monitoring Using IKONOS; OrbView-3 Relative Accuracy Results and Impacts on Exploitation and Accuracy Improvement; Using Remotely Sensed Imagery to Determine Impervious Surface in Sioux Falls, South Dakota; Applying High-Resolution Satellite Imagery and Remotely Sensed Data to Local Government Applications: Sioux Falls, South Dakota; Automatic Co-Registration of QuickBird Data for Change Detection Applications; Developing Coastal Surface Roughness Maps Using ASTER and QuickBird Data Sources; Automated, Near-Real Time Cloud and Cloud Shadow Detection in High Resolution VNIR Imagery; Science Applications of High Resolution Imagery at the USGS EROS Data Center; Draft Plan for Characterizing Commercial Data Products in Support of Earth Science Research; Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems; Determining Regional Arctic Tundra Carbon Exchange: A Bottom-Up Approach; 
Using IKONOS Imagery to Assess Impervious Surface Area, Riparian Buffers and Stream Health in the Mid-Atlantic Region; Commercial Remote Sensing Space Policy Civil Implementation Update; USGS Commercial Remote Sensing Data Contracts (CRSDC); and Commercial Remote Sensing Space Policy (CRSSP): Civil Near-Term Requirements Collection Update.
ERIC Educational Resources Information Center
National Center for Education Statistics (ED), Washington, DC.
This leaflet is a guide to data resources on the Internet related to education. The first Web site listed, http://nces.ed.gov/globallocator/, allows the user to search for public and private elementary and secondary schools by name, city, state, or zip code. The second site, "The Students' Classroom," offers information on a range of…
Mobile Phone Based System Opportunities to Home-based Managing of Chemotherapy Side Effects.
Davoodi, Somayeh; Mohammadzadeh, Zeinab; Safdari, Reza
2016-06-01
The application of mobile phone based systems in cancer care, especially in chemotherapy management, has grown remarkably in recent decades. Because chemotherapy side effects significantly affect patients' lives, it is necessary to find ways to control them. This research reviews experiences of using mobile phone based systems for home-based monitoring of chemotherapy side effects in cancer. In this literature review, searches were conducted with keywords such as cancer, chemotherapy, mobile phone, information technology, side effects and self-management in the Science Direct, Google Scholar and PubMed databases, covering publications since 2005. Today, because of the growing burden of cancer, methods and innovations such as information technology are needed to manage and control it. Mobile phone based systems are solutions that provide quick access for monitoring chemotherapy side effects in cancer patients at home. The studies investigated demonstrate that the use of mobile phones in chemotherapy management has positive results and leads to patient and clinician satisfaction. This study shows that mobile phone systems for home-based monitoring of chemotherapy side effects work well. As a result, knowledge of cancer self-management and the rate of patients' effective participation in the care process improved.
NCBI GEO: archive for functional genomics data sets—10 years on
Barrett, Tanya; Troup, Dennis B.; Wilhite, Stephen E.; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F.; Tomashevsky, Maxim; Marshall, Kimberly A.; Phillippy, Katherine H.; Sherman, Patti M.; Muertter, Rolf N.; Holko, Michelle; Ayanbule, Oluwabukunmi; Yefanov, Andrey; Soboleva, Alexandra
2011-01-01
A decade ago, the Gene Expression Omnibus (GEO) database was established at the National Center for Biotechnology Information (NCBI). The original objective of GEO was to serve as a public repository for high-throughput gene expression data generated mostly by microarray technology. However, the research community quickly applied microarrays to non-gene-expression studies, including examination of genome copy number variation and genome-wide profiling of DNA-binding proteins. Because the GEO database was designed with a flexible structure, it was possible to quickly adapt the repository to store these data types. More recently, as the microarray community switches to next-generation sequencing technologies, GEO has again adapted to host these data sets. Today, GEO stores over 20 000 microarray- and sequence-based functional genomics studies, and continues to handle the majority of direct high-throughput data submissions from the research community. Multiple mechanisms are provided to help users effectively search, browse, download and visualize the data at the level of individual genes or entire studies. This paper describes recent database enhancements, including new search and data representation tools, as well as a brief review of how the community uses GEO data. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/. PMID:21097893
Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions
NASA Astrophysics Data System (ADS)
Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard
2017-12-01
Real-time monitoring of engineering structures in case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant evaluation methods for large sets of data, either collected during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. The mathematical formulae used in the algorithm provide maximal sensitivity to detect even minimal changes in the object's behavior. The sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out and the verification against laboratory survey data showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different and various sensors (total stations, leveling, cameras, radar).
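As a rough illustration of the moving-average screening described above (not the authors' tuned procedure; the window length, threshold, and displacement values below are invented), a trailing-window mean and spread can flag observations that deviate anomalously:

```python
# Minimal sketch: flag observations whose deviation from the trailing window
# mean exceeds a multiple of the window's spread.

def moving_average_outliers(series, window=5, threshold=3.0):
    """Return indices whose deviation from the trailing-window mean exceeds threshold * spread."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = sum(recent) / window
        # population-style spread of the trailing window (guard against zero)
        std = (sum((x - mean) ** 2 for x in recent) / window) ** 0.5 or 1e-9
        if abs(series[i] - mean) > threshold * std:
            flagged.append(i)
    return flagged

# Toy displacement record (mm) with one anomalous jump at index 12.
obs = [0.1, 0.12, 0.11, 0.13, 0.12, 0.11, 0.12, 0.13, 0.12, 0.11, 0.12, 0.13, 0.95, 0.12, 0.11]
print(moving_average_outliers(obs))   # -> [12]
```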
Increased ISR operator capability utilizing a centralized 360° full motion video display
NASA Astrophysics Data System (ADS)
Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.
2012-06-01
In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad of electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data describing the environment. In this paper we present a novel approach to contextualizing multi-sensor data onto a full motion video real-time 360 degree imaging display. The system described could function as a primary display system for command and control in security, military and observation posts. It has the ability to process and enable interactive control of multiple other sensor systems, and it enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. It can also be used to interface to other systems including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
NASA Astrophysics Data System (ADS)
Czajkowski, M.; Shilliday, A.; LoFaso, N.; Dipon, A.; Van Brackle, D.
2016-09-01
In this paper, we describe and depict the Defense Advanced Research Projects Agency (DARPA)'s OrbitOutlook Data Archive (OODA) architecture. OODA is the infrastructure that DARPA's OrbitOutlook program has developed to integrate diverse data from various academic, commercial, government, and amateur space situational awareness (SSA) telescopes. At the heart of the OODA system is its world model - a distributed data store built to quickly query very large quantities of information spread across multiple processing nodes and data centers. The world model applies a multi-index approach where each index is a distinct view on the data. This allows analysts and analytics (algorithms) to access information through queries with a variety of terms that may be of interest to them. Our indices include: a structured global-graph view of knowledge, a keyword search of data content, an object-characteristic range search, and a geospatial-temporal orientation of spatially located data. In addition, the world model applies a federated approach by connecting to existing databases and integrating them into one single interface as a "one-stop shopping place" to access SSA information. Beyond the world model, OODA provides a processing platform on which analysts can explore the data and analytics can execute. Analytic algorithms can use OODA to take raw data and build information from it. They can store these products back into the world model, allowing analysts to gain situational awareness from this information. Analysts in turn help decision makers use this knowledge to address a wide range of SSA problems. OODA is designed to make it easy for software developers who build graphical user interfaces (GUIs) and algorithms to quickly get started with working with this data. This is done through a multi-language software development kit that includes multiple application program interfaces (APIs) and a data model with SSA concepts and terms such as: space observation, observable, measurable, metadata, track, space object, catalog, expectation, and maneuver.
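To make the multi-index idea concrete, here is a toy sketch of one record store exposed through several independent indexes (keyword and temporal views are shown). The record fields and API are invented; this is not DARPA's OODA implementation.

```python
# Toy "multiple indexes over one data store" pattern: each index is a distinct
# view onto the same records (keyword view and time view shown).
from collections import defaultdict
import bisect

class TinyWorldModel:
    def __init__(self):
        self.records = []                      # the single underlying store
        self.keyword_index = defaultdict(set)  # word -> record ids
        self.time_index = []                   # sorted (timestamp, id) pairs

    def add(self, rec):
        rid = len(self.records)
        self.records.append(rec)
        for word in rec["text"].lower().split():
            self.keyword_index[word].add(rid)
        bisect.insort(self.time_index, (rec["time"], rid))
        return rid

    def by_keyword(self, word):
        return [self.records[i] for i in sorted(self.keyword_index[word.lower()])]

    def by_time(self, start, end):
        lo = bisect.bisect_left(self.time_index, (start, -1))
        hi = bisect.bisect_right(self.time_index, (end, float("inf")))
        return [self.records[i] for _, i in self.time_index[lo:hi]]

wm = TinyWorldModel()
wm.add({"text": "optical track of object 12345", "time": 100, "brightness": 7.2})
wm.add({"text": "radar observation of debris", "time": 250, "brightness": 9.1})
print(wm.by_keyword("track"))
print(wm.by_time(0, 200))
```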
Fast Laser Holographic Interferometry For Wind Tunnels
NASA Technical Reports Server (NTRS)
Lee, George
1989-01-01
Proposed system makes holographic interferograms quickly in wind tunnels. Holograms reveal two-dimensional flows around airfoils and provide information on distributions of pressure, structures of wake and boundary layers, and density contours of flow fields. Holograms form quickly in thermoplastic plates in wind tunnel. Plates rigid and left in place so neither vibrations nor photographic-development process degrades accuracy of holograms. System processes and analyzes images quickly. Semiautomatic micro-computer-based desktop image-processing unit now undergoing development moves easily to wind tunnel, and its speed and memory are adequate for flows about airfoils.
QuickCash: Secure Transfer Payment Systems
Alhothaily, Abdulrahman; Alrawais, Arwa; Song, Tianyi; Lin, Bin; Cheng, Xiuzhen
2017-01-01
Payment systems play a significant role in our daily lives. They are an important driver of economic activities and a vital part of the banking infrastructure of any country. Several current payment systems focus on security and reliability but pay less attention to users’ needs and behaviors. For example, people may share their bankcards with friends or relatives to withdraw money for various reasons. This behavior can lead to a variety of privacy and security issues since the cardholder has to share a bankcard and other sensitive information such as a personal identification number (PIN). In addition, it is commonplace that cardholders may lose their cards, and may not be able to access their accounts due to various reasons. Furthermore, transferring money to an individual who has lost their bankcard and identification information is not a straightforward task. A user-friendly person-to-person payment system is urgently needed to perform secure and reliable transactions that benefit from current technological advancements. In this paper, we propose two secure fund transfer methods termed QuickCash Online and QuickCash Offline to transfer money from peer to peer using the existing banking infrastructure. Our methods provide a convenient way to transfer money quickly, and they do not require using bank cards or any identification card. Unlike other person-to-person payment systems, the proposed methods do not require the receiving entity to have a bank account, or to perform any registration procedure. We implement our QuickCash payment systems and analyze their security strengths and properties. PMID:28608846
1978-04-15
analyst who is concerned with preparing the data base for a war game, selecting optional features of QUICK, designating control parameters, submitting... Computer System Manual CSM UM 9-77, Volume III, 15 April 1978, Command and Control Technical Center (CCTC), QUICK-REACTING... 3.1.1.1 RECALC Mode; 3.1.1.2 Non-RECALC Mode; 3.1.1.3 Mode Selection and JCL Considerations
A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs
NASA Astrophysics Data System (ADS)
Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.
2013-12-01
The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of Open Source libraries.
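As an illustration of the kind of OpenSearch request described above, the sketch below builds a query URL carrying keywords, a spatial bounding box, and a temporal range. The endpoint and parameter names are placeholders, not NSIDC's or ACADIS's actual API.

```python
# Illustrative OpenSearch-style request: keywords plus a spatial bounding box
# and a temporal range. The endpoint URL and parameter names are placeholders.
import urllib.parse
import urllib.request

def build_opensearch_url(base, keywords, bbox, start, end, count=25):
    params = {
        "q": keywords,                       # free-text keywords
        "bbox": ",".join(map(str, bbox)),    # west,south,east,north
        "startDate": start,                  # ISO 8601 temporal range
        "endDate": end,
        "count": count,
    }
    return base + "?" + urllib.parse.urlencode(params)

url = build_opensearch_url(
    "https://example.org/opensearch",        # placeholder endpoint
    keywords="sea ice extent",
    bbox=(-180, 60, 180, 90),
    start="2012-01-01", end="2012-12-31",
)
print(url)
# response = urllib.request.urlopen(url).read()   # would return Atom/JSON results
```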
Electronic Biomedical Literature Search for Budding Researcher
Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.
2013-01-01
Searching for specific and well-defined literature related to the subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for the study objectives and hypothesis. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of the medical literature include Google, Google Scholar, Scirus, the Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about types of literature, levels of evidence, and details about a search engine's features such as availability, user interface, ease of access, reputable content, and period of time covered allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937
Su, Xiaoquan; Xu, Jian; Ning, Kang
2012-10-01
Scientists have long sought to effectively compare different microbial communities (also referred to as 'metagenomic samples' here) on a large scale: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest. Any metagenomic sample could then be searched against this database to find the most similar metagenomic sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on these datasets. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it achieves accuracies similar to the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. Contact: ningkang@qibebt.ac.cn. Supplementary data are available at Bioinformatics online.
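For intuition about scoring sample similarity from taxonomic annotations, here is a deliberately simplified sketch (shared relative abundance aggregated over taxonomy levels). It is not the Meta-Storms scoring function, and the taxa, abundances, and level weights are invented.

```python
# Minimal sketch of scoring how similar two microbial community samples are
# from their taxonomic abundance profiles. Illustration only; not the actual
# Meta-Storms phylogeny-based scoring function.

def level_overlap(a, b):
    """Shared relative abundance between two {taxon: fraction} profiles."""
    return sum(min(a.get(t, 0.0), b.get(t, 0.0)) for t in set(a) | set(b))

def sample_similarity(sample_a, sample_b, level_weights=(0.3, 0.7)):
    """Weighted overlap across taxonomy levels (e.g. phylum, then genus)."""
    score = 0.0
    for (lvl_a, lvl_b), w in zip(zip(sample_a, sample_b), level_weights):
        score += w * level_overlap(lvl_a, lvl_b)
    return score

# Each sample is given per taxonomy level: phylum-level then genus-level.
s1 = ({"Firmicutes": 0.6, "Bacteroidetes": 0.4},
      {"Lactobacillus": 0.5, "Bacteroides": 0.4, "Clostridium": 0.1})
s2 = ({"Firmicutes": 0.5, "Bacteroidetes": 0.5},
      {"Lactobacillus": 0.3, "Bacteroides": 0.5, "Prevotella": 0.2})

print(round(sample_similarity(s1, s2), 3))   # higher = more similar
```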
OpenSearch (ECHO-ESIP) & REST API for Earth Science Data Access
NASA Astrophysics Data System (ADS)
Mitchell, A.; Cechini, M.; Pilone, D.
2010-12-01
This presentation will provide a brief technical overview of OpenSearch, the Earth Science Information Partners (ESIP) Federated Search framework, and the REST architecture; discuss NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) implementation lessons learned; and demonstrate the simplified usage of these technologies. SOAP, as a framework for web service communication, has numerous advantages for Enterprise applications and Java/C# type programming languages. As a technical solution, SOAP has been a reliable framework on top of which many applications have been successfully developed and deployed. However, as interest grows for quick development cycles and more intriguing “mashups,” the SOAP API loses its appeal. Lightweight and simple are the vogue characteristics that are sought after. Enter the REST API architecture and OpenSearch format. Both of these items provide a new path for application development, addressing some of the issues unresolved by SOAP. ECHO has made available all of its discovery, order submission, and data management services through a publicly accessible SOAP API. This interface is utilized by a variety of ECHO client and data partners to provide valuable capabilities to end users. As ECHO interacted with current and potential partners looking to develop Earth Science tools utilizing ECHO, it became apparent that the development overhead required to interact with the SOAP API was a growing barrier to entry. ECHO acknowledged the technical issues that were being uncovered by its partner community and chose to provide two new interfaces for interacting with the ECHO metadata catalog. The first interface is built upon the OpenSearch format and ESIP Federated Search framework. Leveraging these two items, a client (ECHO-ESIP) was developed with a focus on simplified searching and results presentation. The second interface is built upon the Representational State Transfer (REST) architecture. Leveraging the REST architecture, a new API has been made available that provides access to the entire SOAP API suite of services. The results of these development activities have not only positioned ECHO to engage in the thriving world of mashup applications, but also provided an excellent real-world case study of how to successfully leverage these emerging technologies.
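To illustrate why OpenSearch lowers the barrier to entry relative to SOAP, the sketch below fills an OpenSearch-style URL template. The template and endpoint are made up, although {searchTerms} and the {geo:box}/{time:start}/{time:end} extension parameters are standard OpenSearch names.

```python
# Sketch of the OpenSearch description-document idea: a client fills in a URL
# template's parameters rather than hard-coding an API.
import re

def fill_template(template, values):
    """Replace {param} / {param?} placeholders, then drop optional ones left unset."""
    out = template
    for key, val in values.items():
        out = out.replace("{" + key + "}", str(val)).replace("{" + key + "?}", str(val))
    return re.sub(r"[&?][\w:]+=\{[\w:]+\?\}", "", out)

# Made-up template; a real one would come from a service's description document.
template = ("https://example.org/granules?keyword={searchTerms}"
            "&bbox={geo:box?}&start={time:start?}&end={time:end?}&page_size=10")

url = fill_template(template, {
    "searchTerms": "MODIS+snow+cover",
    "time:start": "2010-01-01T00:00:00Z",
    "time:end": "2010-03-31T00:00:00Z",
})
print(url)   # bbox was optional and unset, so it is removed
```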
An alternative ionospheric correction model for global navigation satellite systems
NASA Astrophysics Data System (ADS)
Hoque, M. M.; Jakowski, N.
2015-04-01
The ionosphere is recognized as a major error source for single-frequency operations of global navigation satellite systems (GNSS). To enhance single-frequency operations the global positioning system (GPS) uses an ionospheric correction algorithm (ICA) driven by 8 coefficients broadcasted in the navigation message every 24 h. Similarly, the global navigation satellite system Galileo uses the electron density NeQuick model for ionospheric correction. The Galileo satellite vehicles (SVs) transmit 3 ionospheric correction coefficients as driver parameters of the NeQuick model. In the present work, we propose an alternative ionospheric correction algorithm called Neustrelitz TEC broadcast model NTCM-BC that is also applicable for global satellite navigation systems. Like the GPS ICA or Galileo NeQuick, the NTCM-BC can be optimized on a daily basis by utilizing GNSS data obtained at the previous day at monitor stations. To drive the NTCM-BC, 9 ionospheric correction coefficients need to be uploaded to the SVs for broadcasting in the navigation message. Our investigation using GPS data of about 200 worldwide ground stations shows that the 24-h-ahead prediction performance of the NTCM-BC is better than the GPS ICA and comparable to the Galileo NeQuick model. We have found that the 95 percentiles of the prediction error are about 16.1, 16.1 and 13.4 TECU for the GPS ICA, Galileo NeQuick and NTCM-BC, respectively, during a selected quiet ionospheric period, whereas the corresponding numbers are found about 40.5, 28.2 and 26.5 TECU during a selected geomagnetic perturbed period. However, in terms of complexity the NTCM-BC is easier to handle than the Galileo NeQuick and in this respect comparable to the GPS ICA.
A PC-based telemetry system for acquiring and reducing data from multiple PCM streams
NASA Astrophysics Data System (ADS)
Simms, D. A.; Butterfield, C. P.
1991-07-01
The Solar Energy Research Institute's (SERI) Wind Research Program is using Pulse Code Modulation (PCM) Telemetry Data-Acquisition Systems to study horizontal-axis wind turbines. Many PCM systems are combined for use in test installations that require accurate measurements from a variety of different locations. SERI has found them ideal for data-acquisition from multiple wind turbines and meteorological towers in wind parks. A major problem has been in providing the capability to quickly combine and examine incoming data from multiple PCM sources in the field. To solve this problem, SERI has developed a low-cost PC-based PCM Telemetry Data-Reduction System (PC-PCM System) to facilitate quick, in-the-field multiple-channel data analysis. The PC-PCM System consists of two basic components. First, PC-compatible hardware boards are used to decode and combine multiple PCM data streams. Up to four hardware boards can be installed in a single PC, which provides the capability to combine data from four PCM streams directly to PC disk or memory. Each stream can have up to 62 data channels. Second, a software package written for use under DOS was developed to simplify data-acquisition control and management. The software, called the Quick-Look Data Management Program, provides a quick, easy-to-use interface between the PC and multiple PCM data streams. The Quick-Look Data Management Program is a comprehensive menu-driven package used to organize, acquire, process, and display information from incoming PCM data streams. The paper describes both hardware and software aspects of the SERI PC-PCM system, concentrating on features that make it useful in an experiment test environment to quickly examine and verify incoming data from multiple PCM streams. Also discussed are problems and techniques associated with PC-based telemetry data-acquisition, processing, and real-time display.
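The core "quick-look" operation (viewing several decoded streams together) can be pictured as a time-ordered merge of independently acquired channel samples. The sketch below is illustrative only, with invented sample values, and does not reflect the PC-PCM hardware or file formats.

```python
# Toy sketch: merge time-stamped samples from multiple independently decoded
# streams into a single time-ordered sequence for a quick look.
import heapq

stream_a = [(0.00, "a", 1.2), (0.10, "a", 1.3), (0.20, "a", 1.1)]   # (time s, stream, value)
stream_b = [(0.05, "b", 7.9), (0.15, "b", 8.1)]
stream_c = [(0.02, "c", 0.4), (0.12, "c", 0.5), (0.22, "c", 0.6)]

# heapq.merge assumes each input stream is already sorted by time, which holds
# for samples arriving in acquisition order.
for t, stream, value in heapq.merge(stream_a, stream_b, stream_c):
    print(f"{t:0.2f}s  stream {stream}  value {value}")
```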
Dynamic taxonomies applied to a web-based relational database for geo-hydrological risk mitigation
NASA Astrophysics Data System (ADS)
Sacco, G. M.; Nigrelli, G.; Bosio, A.; Chiarle, M.; Luino, F.
2012-02-01
In its 40 years of activity, the Research Institute for Geo-hydrological Protection of the Italian National Research Council has amassed a vast and varied collection of historical documentation on landslides, muddy-debris flows, and floods in northern Italy from 1600 to the present. Since 2008, the archive resources have been maintained through a relational database management system. The database is used for routine study and research purposes as well as for providing support during geo-hydrological emergencies, when data need to be quickly and accurately retrieved. Retrieval speed and accuracy are the main objectives of an implementation based on a dynamic taxonomies model. Dynamic taxonomies are a general knowledge management model for configuring complex, heterogeneous information bases that support exploratory searching. At each stage of the process, the user can explore or browse the database in a guided yet unconstrained way by selecting the alternatives suggested for further refining the search. Dynamic taxonomies have been successfully applied to such diverse and apparently unrelated domains as e-commerce and medical diagnosis. Here, we describe the application of dynamic taxonomies to our database and compare it to traditional relational database query methods. The dynamic taxonomy interface, essentially a point-and-click interface, is considerably faster and less error-prone than traditional form-based query interfaces that require the user to remember and type in the "right" search keywords. Finally, dynamic taxonomy users have confirmed that one of the principal benefits of this approach is the confidence of having considered all the relevant information. Dynamic taxonomies and relational databases work in synergy to provide fast and precise searching: one of the most important factors in timely response to emergencies.
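The dynamic-taxonomy interaction can be illustrated with a small faceted-filtering sketch: after each selection, only the facet values that still match are offered for further refinement. The event records and facet names below are invented, not entries from the institute's archive.

```python
# Small faceted (dynamic-taxonomy style) filtering sketch: refine by selecting
# facet values, then show which values remain available for the next step.
from collections import Counter

events = [
    {"type": "flood",       "region": "Piedmont", "century": "1900s"},
    {"type": "landslide",   "region": "Piedmont", "century": "1800s"},
    {"type": "flood",       "region": "Liguria",  "century": "1900s"},
    {"type": "debris flow", "region": "Piedmont", "century": "1900s"},
]

def refine(records, **selected):
    """Keep records matching every selected facet value."""
    return [r for r in records if all(r[k] == v for k, v in selected.items())]

def remaining_choices(records, facet):
    """Facet values (with counts) still available after the current selection."""
    return Counter(r[facet] for r in records)

current = refine(events, region="Piedmont")
print(remaining_choices(current, "type"))     # only types occurring in Piedmont
print(remaining_choices(current, "century"))  # and the centuries they span
```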
Liu, Qiaoxia; Zhou, Binbin; Wang, Xinliang; Ke, Yanxiong; Jin, Yu; Yin, Lihui; Liang, Xinmiao
2012-12-01
A search library of benzylisoquinoline alkaloids was established based on the preparation of alkaloid fractions from Rhizoma coptidis, Cortex phellodendri, and Rhizoma corydalis. In this work, two alkaloid fractions from each herbal medicine were first prepared based on selective separation on the "click" binaphthyl column. These alkaloid fractions were then analyzed on a C18 column by liquid chromatography coupled with tandem mass spectrometry. Many structure-related compounds were included in these alkaloid fractions, which led to easy separation and good MS response in further work. Therefore, a search library of 52 benzylisoquinoline alkaloids was established, which included eight aporphine, 19 tetrahydroprotoberberine, two protopine, two benzyltetrahydroisoquinoline, and 21 protoberberine alkaloids. The information in the search library contained compound names, structures, retention times, accurate masses, fragmentation pathways of benzylisoquinoline alkaloids, and their sources from the three herbal medicines. Using such a library, the alkaloids, especially trace and unknown components in some herbal medicines, could be accurately and quickly identified. In addition, the distribution of benzylisoquinoline alkaloids in the herbal medicines could also be summarized by searching the source samples in the library. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
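A query against such a library amounts to matching an observed accurate mass and retention time within tolerances; the sketch below shows that matching step only, with invented entries and tolerances rather than values from the published library.

```python
# Hypothetical library lookup: match an observed accurate mass (ppm tolerance)
# and retention time (minute window) against stored entries. Entries invented.

library = [
    {"name": "compound_A", "mass": 336.1236, "rt_min": 21.4},
    {"name": "compound_B", "mass": 352.1549, "rt_min": 22.1},
    {"name": "compound_C", "mass": 355.1784, "rt_min": 18.7},
]

def search(observed_mass, observed_rt, ppm_tol=10.0, rt_tol=0.5):
    hits = []
    for entry in library:
        ppm_error = abs(observed_mass - entry["mass"]) / entry["mass"] * 1e6
        if ppm_error <= ppm_tol and abs(observed_rt - entry["rt_min"]) <= rt_tol:
            hits.append((entry["name"], round(ppm_error, 1)))
    return hits

print(search(336.1240, 21.3))   # -> [('compound_A', 1.2)]
```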
Using ZFIN: Data Types, Organization, and Retrieval.
Van Slyke, Ceri E; Bradford, Yvonne M; Howe, Douglas G; Fashena, David S; Ramachandran, Sridhar; Ruzicka, Leyla
2018-01-01
The Zebrafish Model Organism Database (ZFIN; zfin.org) was established in 1994 as the primary genetic and genomic resource for the zebrafish research community. Some of the earliest records in ZFIN were for people and laboratories. Since that time, services and data types provided by ZFIN have grown considerably. Today, ZFIN provides the official nomenclature for zebrafish genes, mutants, and transgenics and curates many data types including gene expression, phenotypes, Gene Ontology, models of human disease, orthology, knockdown reagents, transgenic constructs, and antibodies. Ontologies are used throughout ZFIN to structure these expertly curated data. An integrated genome browser provides genomic context for genes, transgenics, mutants, and knockdown reagents. ZFIN also supports a community wiki where the research community can post new antibody records and research protocols. Data in ZFIN are accessible via web pages, download files, and the ZebrafishMine (zebrafishmine.org), an installation of the InterMine data warehousing software. Searching for data at ZFIN utilizes both parameterized search forms and a single box search for searching or browsing data quickly. This chapter aims to describe the primary ZFIN data and services, and provide insight into how to use and interpret ZFIN searches, data, and web pages.
Changing viewer perspectives reveals constraints to implicit visual statistical learning.
Jiang, Yuhong V; Swallow, Khena M
2014-10-07
Statistical learning (learning environmental regularities to guide behavior) likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probable regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
NASA Astrophysics Data System (ADS)
Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barker, D.; Barnum, S. H.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th S.; Bebronne, M.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Bergmann, G.; Berliner, J. M.; Bersanetti, D.; Bertolini, A.; Bessis, D.; Betzwieser, J.; Beyersdorf, P. T.; Bhadbhade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bowers, J.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brannen, C. A.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Colombini, M.; Constancio, M., Jr.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; Debreczeni, G.; Degallaix, J.; Deleeuw, E.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Dietz, A.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Dmitry, K.; Donovan, F.; Dooley, K. L.; Doravari, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edwards, M.; Effler, A.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; EndrHoczi, G.; Essick, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. 
S.; Fiori, I.; Fisher, R.; Flaminio, R.; Foley, E.; Foley, S.; Forsi, E.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B.; Hall, E.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Heefner, J.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hong, T.; Hooper, S.; Horrom, T.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hua, Z.; Huang, V.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Iafrate, J.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jang, Y. J.; Jaranowski, P.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kucharczyk, C.; Kudla, S.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. D.; Lawrie, C.; Leaci, P.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, J.; Leonardi, M.; Leong, J. R.; Le Roux, A.; Leroy, N.; Letendre, N.; Levine, B.; Lewis, J. B.; Lhuillier, V.; Li, T. G. F.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Liu, F.; Liu, H.; Liu, Y.; Liu, Z.; Lloyd, D.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Luan, J.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; May, G.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. 
C.; McIntyre, G.; McIver, J.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meier, T.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Mikhailov, E. E.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Mokler, F.; Moraru, D.; Moreno, G.; Morgado, N.; Mori, T.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Nash, T.; Naticchioni, L.; Nayak, R.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nishida, E.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; Ortega Larcher, W.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Ou, J.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Peiris, P.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pindor, B.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poole, V.; Poux, C.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quintero, E.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Roever, C.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sanders, J.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Soden, K.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stevens, D.; Stochino, A.; Stone, R.; Strain, K. A.; Straniero, N.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Talukder, D.; Tang, L.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. 
I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Verma, S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vlcek, B.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vrinceanu, D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Walker, M.; Wallace, L.; Wan, Y.; Wang, J.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wibowo, S.; Wiesner, K.; Wilkinson, C.; Williams, L.; Williams, R.; Williams, T.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yum, H.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zhu, H.; Zhu, X. J.; Zotov, N.; Zucker, M. E.; Zweizig, J.
2014-04-01
We report on an all-sky search for periodic gravitational waves in the frequency range 50-1000 Hz with the first derivative of frequency in the range -8.9 × 10^-10 Hz s^-1 to zero in two years of data collected during LIGO’s fifth science run. Our results employ a Hough transform technique, introducing a χ² test and analysis of coincidences between the signal levels in years 1 and 2 of observations that offers a significant improvement in the product of strain sensitivity with compute cycles per data sample compared to previously published searches. Since our search yields no surviving candidates, we present results taking the form of frequency-dependent, 95% confidence upper limits on the strain amplitude h0. The most stringent upper limit from year 1 is 1.0 × 10^-24 in the 158.00-158.25 Hz band. In year 2, the most stringent upper limit is 8.9 × 10^-25 in the 146.50-146.75 Hz band. This improved detection pipeline, which is at least two orders of magnitude more computationally efficient than our flagship Einstein@Home search, will be important for ‘quick-look’ searches in the Advanced LIGO and Virgo detector era.
Air Force Reusable Booster System A Quick-look, Design Focused Modeling and Cost Analysis Study
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2011-01-01
Presents work supporting the Air Force Reusable Booster System (RBS) cost study, with goals as follows: support US launch system decision makers, especially with regard to the research, technology and demonstration investments required for reusable systems to succeed; encourage operable directions in reusable booster / launch vehicle system technology choices, system design, and product and process developments; and perform a quick-look cost study while developing a cost model for more refined future analysis.
A System for Heart Sounds Classification
Redlarski, Grzegorz; Gradolewski, Dawid; Palkowski, Aleksander
2014-01-01
The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. As for cardiac diseases – one of the major causes of death around the globe – the concept of an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to advancements in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems of heart diseases that could be capable of distinguishing most known pathological states have not yet been developed. The main issue is the non-stationary character of phonocardiography signals as well as the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon combining a Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in the performance of the diagnostic system, in terms of accuracy, complexity and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four different major classification methods, proving its reliability. PMID:25393113
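The pipeline described (LPC features feeding an SVM) can be sketched as below; the signals are synthetic, the Modified Cuckoo Search parameter tuning is omitted, and this is not the authors' implementation.

```python
# Rough sketch of an LPC-feature + SVM pipeline on synthetic "heart sound"
# segments. Two invented classes differ only in dominant frequency.
import numpy as np
from sklearn.svm import SVC

def lpc(signal, order=6):
    """Linear-prediction coefficients via the autocorrelation (normal-equation) method."""
    x = np.asarray(signal, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def synthetic_beat(freq, n=800, fs=4000.0, seed=0):
    """Crude stand-in for a heart-sound segment: a damped tone plus noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    return np.exp(-12 * t) * np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(n)

X = [lpc(synthetic_beat(f, seed=s)) for f, s in [(60, 1), (65, 2), (62, 3), (140, 4), (150, 5), (145, 6)]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict([lpc(synthetic_beat(63, seed=7)), lpc(synthetic_beat(148, seed=8))]))
```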
Algorithms for Automated DNA Assembly
2010-01-01
polyketide synthase gene cluster. Proc. Natl Acad. Sci. USA, 101, 15573–15578. 16. Shetty, R.P., Endy, D. and Knight, T.F. Jr (2008) Engineering BioBrick vectors... correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and... to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with
Intellectual Issues in the History of Artificial Intelligence
1982-10-28
the history of science is in terms of important scientific events and discoveries, linked to and by the scientists who were responsible for them... knowledge (coupled, sometimes, with modest search). However, as is usual in the history of science, work on powerful AI programs never stopped; it only... for AI, perhaps, because the nature of mind seems... the quick. But the history of science reminds us easily enough that at various stages
Why Was General Richard O’Connor’s Command in Northwest Europe Less Effective Than Expected?
2011-03-01
...this source of oil the British war effort would quickly grind to a halt. The region gained further importance because of the advantage it offered... considered impassable to armored formations. After heavy fighting on ground that suited a dismounted infantry defense, the Australian Division
The Use of Metadata Visualisation to Assist Information Retrieval
2007-10-01
album title, the track length and the genre of music. Again, any of these pieces of information can be used to quickly search and locate specific...that person. Music files also have metadata tags, in a format called ID3. This usually contains information such as the artist, the song title, the...tracks, to provide more information about the entire music collection, or to find similar or diverse tracks within the collection. Metadata is
Authorities and Options for Funding USSOCOM Operations
2014-01-01
Operations and Maintenance funds to anticipate emergent SOF support funding requirements. The MILDEPs have some flexibility to make “fact of life ...but does not direct a specific funding pathway) USSOCOM directs a stop to use of MFP-11 funding for TSOC HQ support. PBD realigns MFP-11 and MFP-2...planning advances relatively quickly, while the search for funding lags. The planning also has a dynamic quality, so the Statement of Requirements
NASA Astrophysics Data System (ADS)
Dabolt, T. O.
2016-12-01
The proliferation of open data and data services continues to thrive and is creating new challenges in how researchers, policy analysts, and other decision makers can quickly discover and use relevant data. While traditional metadata catalog approaches used by applications such as data.gov prove to be useful starting points for data search, they can quickly frustrate end users who are seeking ways to quickly find and then use data in machine-to-machine environments. The Geospatial Platform is overcoming these obstacles and providing end users and application developers a richer, more productive user experience. The Geospatial Platform leverages a collection of open source and commercial technology hosted on Amazon Web Services, providing an ecosystem of services delivering trusted, consistent data in open formats to all users as well as a shared infrastructure for federal partners to serve their spatial data assets. It supports a diverse array of communities of practice, ranging from the 16 National Geospatial Data Asset Themes to homeland security and climate adaptation. Come learn how you can contribute your data and leverage others', or check it out on your own at https://www.geoplatform.gov/
Kash, Melissa J
2016-01-01
In an era where physicians rely on point-of-care databases that provide filtered, pre-appraised, and quickly accessible clinical information by smartphone applications, it is difficult to teach medical students the importance of knowing not only when it is appropriate to search the primary medical literature but also how to do it. This column will describe how librarians at an academic health sciences library use an unusual clinical case to make demonstrations of searching primary medical literature real and meaningful to medical students, and to illustrate vividly the importance of knowing what to do when the answer to a clinical question cannot be found in a point-of-care database.
Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm of protecting the privacy of the user in the distributed networks is given. This algorithm utilizes the location indexes of the users and multiple parallel threads to search and select quickly all the candidate anonymous sets with more users and their location information with more uniform distribution to accelerate the execution of the temporal-spatial anonymous operations, and it allows the users to configure their custom-made privacy-preserving location query requests. The simulated experiment results show that the proposed algorithm can offer simultaneously the location query services for more users and improve the performance of the anonymous server and satisfy the anonymous location requests of the users. PMID:24790579
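A rough sketch of the parallel candidate-set selection idea, under heavy assumptions: the way candidate sets are built, the scoring rule (more users plus a wider spatial spread), and the anonymity threshold k are all illustrative stand-ins for the algorithm's actual design:

```python
# Rough sketch (not the paper's algorithm) of choosing an anonymous set for a
# querying user: build candidate sets that always contain the querier, then
# score them in parallel threads, preferring more users and a wider spread.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
import random
import statistics

@dataclass
class User:
    uid: int
    x: float
    y: float

def candidate_sets(users, querier, k=5, max_size=30):
    """Candidate anonymous sets: the querier plus its m nearest neighbours,
    for every m from k-1 up to max_size-1, so each set contains the querier."""
    others = sorted((u for u in users if u.uid != querier.uid),
                    key=lambda u: (u.x - querier.x) ** 2 + (u.y - querier.y) ** 2)
    return [[querier] + others[:m] for m in range(k - 1, max_size)]

def score(cand):
    """Crude stand-in for the paper's criteria: reward set size and the spread
    (standard deviation) of user locations inside the set."""
    return (len(cand)
            + statistics.pstdev(u.x for u in cand)
            + statistics.pstdev(u.y for u in cand))

if __name__ == "__main__":
    random.seed(1)
    users = [User(i, random.uniform(0, 20), random.uniform(0, 20)) for i in range(200)]
    querier = users[0]
    cands = candidate_sets(users, querier)
    with ThreadPoolExecutor(max_workers=4) as pool:   # parallel scoring of candidates
        scores = list(pool.map(score, cands))
    best = cands[scores.index(max(scores))]
    print(f"anonymous set for user {querier.uid}: {len(best)} users")
```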
A rapid and rational approach to generating isomorphous heavy-atom phasing derivatives
Lu, Jinghua; Sun, Peter D.
2014-01-01
In attempts to replace the conventional trial-and-error heavy-atom derivative search method with a rational approach, we previously defined heavy metal compound reactivity against peptide ligands. Here, we assembled a composite pH and buffer-dependent peptide reactivity profile for each heavy metal compound to guide rational heavy-atom derivative search. When knowledge of the best-reacting heavy-atom compound is combined with mass spectrometry-assisted derivatization, and with a quick-soak method to optimize phasing, it is likely that the traditional heavy-atom compounds could meet the demand of modern high-throughput X-ray crystallography. As an example, we applied this rational heavy-atom phasing approach to determine a previously unknown mouse serum amyloid A2 crystal structure. PMID:25040395
Searching for Variable Stars in the SDSS Calibration Fields (Abstract)
NASA Astrophysics Data System (ADS)
Smith, J. A.; Butner, M.; Tucker, D.; Allam, S.
2018-06-01
(Abstract only) We are searching the Sloan Digital Sky Survey (SDSS) calibration fields for variable stars. This long neglected data set, taken with a 0.5-m telescope, contains nearly 200,000 stars in more than 100 fields which were observed over the course of 8+ years during the observing portion of the SDSS-I and SDSS-II surveys. During the course of the survey, each field was visited from 10 to several thousand times, so our initial pass is just to identify potential variable stars. Our initial "quick-look" effort shows several thousand potential candidates and includes at least one nearby supernova. We present our plans for a follow-up observational program for further identification of variable types and period determinations.
Zhong, Cheng; Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm of protecting the privacy of the user in the distributed networks is given. This algorithm utilizes the location indexes of the users and multiple parallel threads to search and select quickly all the candidate anonymous sets with more users and their location information with more uniform distribution to accelerate the execution of the temporal-spatial anonymous operations, and it allows the users to configure their custom-made privacy-preserving location query requests. The simulated experiment results show that the proposed algorithm can offer simultaneously the location query services for more users and improve the performance of the anonymous server and satisfy the anonymous location requests of the users.
Crowdsourcing: Global search and the twisted roles of consumers and producers.
Bauer, Robert M; Gegenhuber, Thomas
2015-09-01
Crowdsourcing spreads and morphs quickly, shaping areas as diverse as creating, organizing, and sharing knowledge; producing digital artifacts; providing services involving tangible assets; or monitoring and evaluating. Crowdsourcing as sourcing by means of 'global search' yields four types of values for sourcing actors: creative expertise, critical items, execution capacity, and bargaining power. It accesses cheap excess capacities at the work realm's margins, channeling them toward production. Provision and utilization of excess capacities rationalize society while intimately connecting to a broader societal trend twisting consumers' and producers' roles: leading toward 'working consumers' and 'consuming producers' and shifting power toward the latter. Similarly, marketers using crowdsourcing's look and feel to camouflage traditional approaches to bringing consumers under control preserve producer power.
Human factors involvement in bringing the power of AI to a heterogeneous user population
NASA Technical Reports Server (NTRS)
Czerwinski, Mary; Nguyen, Trung
1994-01-01
The Human Factors involvement in developing COMPAQ QuickSolve, an electronic problem-solving and information system for Compaq's line of networked printers, is described. Empowering customers with expert system technology so they could solve advanced networked printer problems on their own was a major goal in designing this system. This process would minimize customer down-time, reduce the number of phone calls to the Compaq Customer Support Center, improve customer satisfaction, and, most importantly, differentiate Compaq printers in the marketplace by providing the best, and most technologically advanced, customer support. This represents a re-engineering of Compaq's customer support strategy and implementation. In its first-generation system, SMART, the objective was to provide expert knowledge to Compaq's help desk operation to more quickly and correctly answer customer questions and problems. QuickSolve is a second-generation system in that customer support is put directly in the hands of the consumers. As a result, the design of QuickSolve presented a number of challenging issues. Because the product would be used by a diverse and heterogeneous set of users, a significant amount of human factors research and analysis was required while designing and implementing the system. This research also shaped the organization and design of the expert system component.
The on-site quality-assurance system for Hyper Suprime-Cam: OSQAH
NASA Astrophysics Data System (ADS)
Furusawa, Hisanori; Koike, Michitaro; Takata, Tadafumi; Okura, Yuki; Miyatake, Hironao; Lupton, Robert H.; Bickerton, Steven; Price, Paul A.; Bosch, James; Yasuda, Naoki; Mineo, Sogo; Yamada, Yoshihiko; Miyazaki, Satoshi; Nakata, Fumiaki; Koshida, Shintaro; Komiyama, Yutaka; Utsumi, Yousuke; Kawanomoto, Satoshi; Jeschke, Eric; Noumaru, Junichi; Schubert, Kiaina; Iwata, Ikuru; Finet, Francois; Fujiyoshi, Takuya; Tajitsu, Akito; Terai, Tsuyoshi; Lee, Chien-Hsiu
2018-01-01
We have developed an automated quick data analysis system for data quality assurance (QA) for Hyper Suprime-Cam (HSC). The system was commissioned in 2012-2014, and has been offered for general observations, including the HSC Subaru Strategic Program, since 2014 March. The system provides observers with data quality information, such as seeing, sky background level, and sky transparency, based on quick analysis as data are acquired. Quick-look images and validation of image focus are also provided through an interactive web application. The system is responsible for the automatic extraction of QA information from acquired raw data into a database, to assist with observation planning, assess progress of all observing programs, and monitor long-term efficiency variations of the instrument and telescope. Enhancements of the system are being planned to facilitate final data analysis, to improve the HSC archive, and to provide legacy products for astronomical communities.
NASA Technical Reports Server (NTRS)
Rui, Hualan; Vollmer, Bruce; Teng, Bill; Jasinski, Michael; Mocko, David; Loeser, Carlee; Kempler, Steven
2016-01-01
The National Climate Assessment-Land Data Assimilation System (NCA-LDAS) is an Integrated Terrestrial Water Analysis and is one of NASA's contributions to the NCA of the United States. The NCA-LDAS has undergone extensive development, including multi-variate assimilation of remotely-sensed water states and anomalies as well as evaluation and verification studies, led by the Goddard Space Flight Center's Hydrological Sciences Laboratory (HSL). The resulting NCA-LDAS data have recently been released to the general public and include those from the Noah land-surface model (LSM) version 3.3 (Noah-3.3) and the Catchment LSM version Fortuna-2.5 (CLSM-F2.5). Standard LSM output variables, including soil moisture, soil temperature, surface fluxes, snow cover depth, groundwater, and runoff, are provided, as well as streamflow computed using a river routing system. The NCA-LDAS data are archived at and distributed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). The data can be accessed via HTTP, OPeNDAP, Mirador search and download, and NASA Earthdata Search. To further facilitate access and use, the NCA-LDAS data are integrated into NASA Giovanni, for quick visualization and analysis, and into the Data Rods system, for retrieval of time series over long time periods. The temporal and spatial resolutions of the NCA-LDAS data are, respectively, daily averages and 0.125 x 0.125 degree, covering North America (25N-53N; 125W-67W) and the period January 1979 to December 2015. The data files are in self-describing, machine-independent, CF-compliant netCDF-4 format.
Expert system for computer-assisted annotation of MS/MS spectra.
Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias
2012-11-01
An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptide sequence, and which outputs publication-quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions.
Expert System for Computer-assisted Annotation of MS/MS Spectra*
Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias
2012-01-01
An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptide sequence, and which outputs publication-quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions. PMID:22888147
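A minimal sketch of the two ideas the abstract pairs, tolerance-based peak annotation and a decoy-based false discovery rate; the tolerance, toy fragment masses, and use of uniformly random decoy m/z values (in place of peaks sampled from other MS/MS spectra) are simplifying assumptions:

```python
# Minimal sketch of rule-based peak annotation with a decoy-based false
# discovery rate, in the spirit of the Expert System described above.
import random

def annotate(peaks, theoretical_mz, tol_ppm=20.0):
    """Return observed peak m/z values that match any theoretical fragment
    m/z within tol_ppm."""
    return [mz for mz in peaks
            if any(abs(mz - t) / t * 1e6 <= tol_ppm for t in theoretical_mz)]

def decoy_fdr(theoretical_mz, n_decoys=1000, tol_ppm=20.0, mz_range=(100, 1500)):
    """Estimate how often random peaks (stand-ins for peaks sampled from other
    MS/MS spectra) would be annotated by the same rule set."""
    random.seed(0)
    decoys = [random.uniform(*mz_range) for _ in range(n_decoys)]
    return len(annotate(decoys, theoretical_mz, tol_ppm)) / n_decoys

if __name__ == "__main__":
    theoretical = [175.119, 304.161, 401.214, 514.298, 627.382]   # toy fragment ions
    observed = [175.120, 280.200, 401.210, 499.900, 627.390]
    hits = annotate(observed, theoretical)
    fdr = decoy_fdr(theoretical)
    print(f"annotated {len(hits)}/{len(observed)} peaks, decoy match rate ~= {fdr:.3%}")
```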
Smartphone Applications for Promoting Healthy Diet and Nutrition: A Literature Review.
Coughlin, Steven S; Whitehead, Mary; Sheats, Joyce Q; Mastromonico, Jeff; Hardy, Dale; Smith, Selina A
Rapid developments in technology have encouraged the use of smartphones in health promotion research and practice. Although many applications (apps) relating to diet and nutrition are available from major smartphone platforms, relatively few have been tested in research studies in order to determine their effectiveness in promoting health. In this article, we summarize data on the use of smartphone applications for promoting healthy diet and nutrition based upon bibliographic searches in PubMed and CINAHL with relevant search terms pertaining to diet, nutrition, and weight loss through August 2015. A total of 193 articles were identified in the bibliographic searches. By screening abstracts or full-text articles, a total of three relevant qualitative studies and 9 randomized controlled trials were identified. In qualitative studies, participants preferred applications that were quick and easy to administer, and those that increase awareness of food intake and weight management. In randomized trials, the use of smartphone apps was associated with better dietary compliance for lower calorie, low fat, and high fiber foods, and higher physical activity levels (p=0.01-0.02) which resulted in more weight loss (p=0.042-<0.0001). Future studies should utilize randomized controlled trial research designs, larger sample sizes, and longer study periods to better establish the diet and nutrition intervention capabilities of smartphones. There is a need for culturally appropriate, tailored health messages to increase knowledge and awareness of health behaviors such as healthy eating. Smartphone apps are likely to be a useful and low-cost intervention for improving diet and nutrition and addressing obesity in the general population. Participants prefer applications that are quick and easy to administer and those that increase awareness of food intake and weight management.
Smartphone Applications for Promoting Healthy Diet and Nutrition: A Literature Review
Coughlin, Steven S.; Whitehead, Mary; Sheats, Joyce Q.; Mastromonico, Jeff; Hardy, Dale; Smith, Selina A.
2015-01-01
Background Rapid developments in technology have encouraged the use of smartphones in health promotion research and practice. Although many applications (apps) relating to diet and nutrition are available from major smartphone platforms, relatively few have been tested in research studies in order to determine their effectiveness in promoting health. Methods In this article, we summarize data on the use of smartphone applications for promoting healthy diet and nutrition based upon bibliographic searches in PubMed and CINAHL with relevant search terms pertaining to diet, nutrition, and weight loss through August 2015. Results A total of 193 articles were identified in the bibliographic searches. By screening abstracts or full-text articles, a total of three relevant qualitative studies and 9 randomized controlled trials were identified. In qualitative studies, participants preferred applications that were quick and easy to administer, and those that increase awareness of food intake and weight management. In randomized trials, the use of smartphone apps was associated with better dietary compliance for lower calorie, low fat, and high fiber foods, and higher physical activity levels (p=0.01-0.02) which resulted in more weight loss (p=0.042-<0.0001). Discussion Future studies should utilize randomized controlled trial research designs, larger sample sizes, and longer study periods to better establish the diet and nutrition intervention capabilities of smartphones. There is a need for culturally appropriate, tailored health messages to increase knowledge and awareness of health behaviors such as healthy eating. Smartphone apps are likely to be a useful and low-cost intervention for improving diet and nutrition and addressing obesity in the general population. Participants prefer applications that are quick and easy to administer and those that increase awareness of food intake and weight management. PMID:26819969
Dick, Amanda A; Harlow, Timothy J; Gogarten, J Peter
2017-02-01
Long Branch Attraction (LBA) is a well-known artifact in phylogenetic reconstruction when dealing with branch length heterogeneity. Here we show another phenomenon, Short Branch Attraction (SBA), which occurs when BLAST searches, a phenetic analysis, are used as a surrogate method for phylogenetic analysis. This error also results from branch length heterogeneity, but this time it is the short branches that are attracting. The SBA artifact is reciprocal and can be returned 100% of the time when multiple branches differ in length by a factor of more than two. SBA is an intended feature of BLAST searches, but becomes an issue, when top scoring BLAST hit analyses are used to infer Horizontal Gene Transfers (HGTs), assign taxonomic category with environmental sequence data in phylotyping, or gather homologous sequences for building gene families. SBA can lead researchers to believe that there has been a HGT event when only vertical descent has occurred, cause slowly evolving taxa to be over-represented and quickly evolving taxa to be under-represented in phylotyping, or systematically exclude quickly evolving taxa from analyses. SBA also contributes to the changing results of top scoring BLAST hit analyses as the database grows, because more slowly evolving taxa, or short branches, are added over time, introducing more potential for SBA. SBA can be detected by examining reciprocal best BLAST hits among a larger group of taxa, including the known closest phylogenetic neighbors. Therefore, one should look for this phenomenon when conducting best BLAST hit analyses as a surrogate method to identify HGTs, in phylotyping, or when using BLAST to gather homologous sequences. Copyright © 2016 Elsevier Inc. All rights reserved.
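A small sketch of the reciprocal-best-hit check the authors suggest for catching Short Branch Attraction artifacts; it assumes BLAST tabular output (-outfmt 6) produced in both search directions, with a tiny inline example standing in for real BLAST result files:

```python
# Sketch of a reciprocal-best-hit check over BLAST tabular (-outfmt 6) rows.
# One-way top hits are the cases worth re-examining for SBA-style artifacts.
import csv

def best_hits(rows):
    """Map each query to its highest-bitscore subject from -outfmt 6 rows."""
    best = {}
    for row in csv.reader(rows, delimiter="\t"):
        query, subject, bitscore = row[0], row[1], float(row[11])
        if query not in best or bitscore > best[query][1]:
            best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(a_vs_b_rows, b_vs_a_rows):
    """Split queries into reciprocated and one-way top hits."""
    ab, ba = best_hits(a_vs_b_rows), best_hits(b_vs_a_rows)
    reciprocal = {q for q, s in ab.items() if ba.get(s) == q}
    return reciprocal, set(ab) - reciprocal

if __name__ == "__main__":
    # geneA1's best hit (geneB9, a short branch) does not point back to it.
    a_vs_b = ["geneA1\tgeneB9\t85.0\t300\t45\t0\t1\t300\t1\t300\t1e-80\t290",
              "geneA2\tgeneB2\t92.0\t300\t24\t0\t1\t300\t1\t300\t1e-120\t420"]
    b_vs_a = ["geneB9\tgeneA7\t88.0\t300\t36\t0\t1\t300\t1\t300\t1e-95\t350",
              "geneB2\tgeneA2\t92.0\t300\t24\t0\t1\t300\t1\t300\t1e-120\t420"]
    rbh, one_way = reciprocal_best_hits(a_vs_b, b_vs_a)
    print("reciprocal best hits:", sorted(rbh), "| re-examine:", sorted(one_way))
```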
Futamura, Masaki; Thomas, Kim S.; Grindlay, Douglas J. C.; Doney, Elizabeth J.; Torley, Donna; Williams, Hywel C.
2013-01-01
Background Many research studies have been published on atopic eczema and these are often summarised in systematic reviews (SRs). Identifying SRs can be time-consuming for health professionals and researchers. In order to facilitate the identification of important research, we have compiled an on-line resource that includes all relevant eczema reviews published since 2000. Methods SRs were searched for in MEDLINE (Ovid), EMBASE (Ovid), PubMed, the Cochrane Database of Systematic Reviews, DARE and NHS Evidence. Selected SRs were assessed against the pre-defined eligibility criteria and relevant articles were grouped by treatment category for the included interventions. All identified systematic reviews are included in the Global Resource of EczemA Trials (GREAT) database (www.greatdatabase.org.uk) and key clinical messages are summarised here. Results A total of 128 SRs were identified, including three clinical guidelines. Of these, 46 (36%) were found in the Cochrane Library. No single database contained all of the SRs found. The number of SRs published per year has increased substantially over the last thirteen years, and reviews were published in a variety of clinical journals. Of the 128 SRs, 1 (1%) was on mechanism, 37 (29%) were on epidemiology, 40 (31%) were on eczema prevention, 29 (23%) were on topical treatments, 31 (24%) were on systemic treatments, and 24 (19%) were on other treatments. All SRs included searches of MEDLINE in their search methods. One hundred six SRs (83%) searched more than one electronic database. There were no language restrictions reported in the search methods of 52 of the SRs (41%). Conclusions This mapping of atopic eczema reviews is a valuable resource. It will help healthcare practitioners, guideline writers, information specialists, and researchers to quickly identify relevant up-to-date evidence in the field for improving patient care. PMID:23505516
Canning, Claire Ann; Loe, Alan; Cockett, Kathryn Jane; Gagnon, Paul; Zary, Nabil
2017-01-01
Curriculum mapping and dynamic visualization are quickly becoming an integral aspect of quality improvement, supporting innovations that drive curriculum quality assurance processes in medical education. CLUE (Curriculum Explorer), a highly interactive, engaging, and independent platform, was developed to support curriculum transparency, enhance student engagement, and enable granular search and display. Reflecting a design-based approach to meet the needs of the school's varied stakeholders, CLUE employs an iterative and reflective approach to drive the evolution of its platform, as it seeks to accommodate the ever-changing needs of our stakeholders in the fast-paced world of medicine and medical education today. CLUE exists independently of institutional systems and, in this way, is uniquely positioned to deliver a data-driven quality improvement resource, easily adaptable for use by any member of our health care professions.
Hernández, P; Dorado, G; Ramírez, M C; Laurie, D A; Snape, J W; Martín, A
2003-01-01
Hordeum chilense is a potential source of useful genes for wheat breeding. The use of this wild species to increase genetic variation in wheat will be greatly facilitated by marker-assisted introgression. In recent years, the search for the most suitable DNA marker system for tagging H. chilense genomic regions in a wheat background has led to the development of RAPD and SCAR markers for this species. RAPDs represent an easy way of quickly generating suitable introgression markers, but their use is limited in heterogeneous wheat genetic backgrounds. SCARs are more specific assays, suitable for automation or multiplexing. Direct sequencing of RAPD products is a cost-effective approach that reduces labour and costs for SCAR development. SSR and STS primers originally developed for wheat and barley provide additional sources of genetic markers. Practical applications of the different marker approaches for obtaining derived introgression products are described.
Attending to items in working memory: Evidence that refreshing and memory search are closely related
Vergauwe, Evie; Cowan, Nelson
2014-01-01
Refreshing refers to the use of attention to reactivate items in working memory (WM). The current study aims at testing the hypothesis that refreshing is closely related to memory search. The assumption is that refreshing and memory search both rely on a basic covert memory process that quickly retrieves the memory items into the focus of attention, thereby reactivating the information (Cowan, 1992; Vergauwe & Cowan, 2014). Consistent with the idea that people use their attention to prevent loss from WM, previous research has shown that increasing the proportion of time during which attention is occupied by concurrent processing, thereby preventing refreshing, results in poorer recall performance in complex span tasks (Barrouillet, Portrat, & Camos, 2011). Here, we tested whether recall performance is differentially affected by prolonged attentional capture caused by memory search. If memory search and refreshing both rely on retrieval from WM, then prolonged attentional capture caused by memory search should not lead to forgetting because memory items are assumed to be reactivated during memory search, in the same way as they would if that period of time were to be used for refreshing. Consistent with this idea, prolonged attentional capture had a disruptive effect when it was caused by the need to retrieve knowledge from long-term memory but not when it was caused by the need to search through the content of WM. The current results support the idea that refreshing operates through a process of retrieval of information into the focus of attention. PMID:25361821
Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert
2014-06-01
The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
Quick Attach Docking Interface for Lunar Electric Rover
NASA Technical Reports Server (NTRS)
Schuler, Jason M.; Nick, Andrew J.; Immer, Christopher; Mueller, Robert P.
2010-01-01
The NASA Lunar Electric Rover (LER) has been developed at Johnson Space Center as a next-generation mobility platform. Based upon a twelve-wheel omni-directional chassis with active suspension, the LER introduces a number of novel capabilities for lunar exploration in both manned and unmanned scenarios. Besides being the primary vehicle for astronauts on the lunar surface, LER will perform tasks such as lunar regolith handling (to include dozing, grading, and excavation), equipment transport, and science operations. In an effort to support these additional tasks, a team at the Kennedy Space Center has produced a universal attachment interface for LER known as the Quick Attach. The Quick Attach is a compact system that has been retro-fitted to the rear of the LER, giving it the ability to dock and undock on the fly with various implements. The Quick Attach utilizes a two-stage docking approach; the first is a mechanical mate which aligns and latches a passive set of hooks on an implement with an actuated cam surface on LER. The mechanical stage is tolerant to misalignment between the implement and the LER during docking, and once the implement is captured a preload is applied to ensure a positive lock. The second stage is an umbilical connection which consists of a dust-resistant enclosure housing a compliant mechanism that is optionally actuated to mate electrical and fluid connections for suitable implements. The Quick Attach system was designed with the largest foreseen input loads considered, including excavation operations and large-mass utility attachments. The Quick Attach system was demonstrated at the Desert Research And Technology Studies (D-RATS) field test in Flagstaff, AZ, along with the lightweight dozer blade LANCE. The LANCE blade is the first implement to utilize the Quick Attach interface and demonstrated the tolerance, speed, and strength of the system in a lunar analog environment.
[Rapid identification of hogwash oil by using synchronous fluorescence spectroscopy].
Sun, Yan-Hui; An, Hai-Yang; Jia, Xiao-Li; Wang, Juan
2012-10-01
To identify hogwash oil quickly, the characteristic delta lambda of hogwash oil was analyzed by three-dimensional fluorescence spectroscopy with parallel factor analysis, and a model was built using synchronous fluorescence spectroscopy with support vector machines (SVM). The results showed that the characteristic delta lambda of hogwash oil was 60 nm. Collecting the original spectra of different samples at the characteristic delta lambda of 60 nm, the best model was established when 5 principal components were selected from the original spectra and the radial basis function (RBF) was used as the kernel function; the optimal penalty factor C and kernel parameter g, obtained by grid searching and 6-fold cross-validation, were 512 and 0.5, respectively. The discrimination rate of the model was 100% for both the training and prediction sets. Thus, synchronous fluorescence spectroscopy provides a quick and accurate way to identify hogwash oil.
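A minimal sketch of the stated model-selection recipe (5 principal components, an RBF-kernel SVM, and a grid search over C and g with 6-fold cross-validation), run here on synthetic spectra rather than measured synchronous fluorescence data; the grid is chosen only so that it contains the reported optimum C = 512 and g = 0.5:

```python
# Sketch of the model selection described above, on synthetic spectra:
# PCA(5) -> RBF SVM, grid search over C and gamma, 6-fold cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
wavelengths = np.linspace(250, 700, 200)

def spectrum(center):
    """Toy synchronous fluorescence spectrum: a Gaussian band plus noise."""
    return (np.exp(-((wavelengths - center) ** 2) / 800.0)
            + 0.02 * rng.standard_normal(wavelengths.size))

# Two toy classes (qualified oil vs hogwash oil), differing in band position.
X = np.vstack([spectrum(420) for _ in range(60)] + [spectrum(470) for _ in range(60)])
y = np.array([0] * 60 + [1] * 60)

model = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
grid = GridSearchCV(
    model,
    {"svc__C": [2 ** k for k in range(1, 12, 2)],        # 2 ... 2048, includes 512
     "svc__gamma": [2 ** k for k in range(-7, 2, 2)]},   # 2**-7 ... 2, includes 0.5
    cv=6,
)
grid.fit(X, y)
print("best parameters:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))
```

On real spectra the reported optimum (C = 512, g = 0.5) sits inside this grid; the synthetic data here will generally select different values.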
NASA Astrophysics Data System (ADS)
Tsukada, Leo; Cannon, Kipp; Hanna, Chad; Keppel, Drew; Meacher, Duncan; Messick, Cody
2018-05-01
Joint electromagnetic and gravitational-wave (GW) observation is a major goal of both the GW astronomy and electromagnetic astronomy communities for the coming decade. One way to accomplish this goal is to direct follow-up of GW candidates. Prompt electromagnetic emission may fade quickly, therefore it is desirable to have GW detection happen as quickly as possible. A leading source of latency in GW detection is the whitening of the data. We examine the performance of a zero-latency whitening filter in a detection pipeline for compact binary coalescence (CBC) GW signals. We find that the filter reproduces signal-to-noise ratio (SNR) sufficiently consistent with the results of the original high-latency and phase-preserving filter for both noise and artificial GW signals (called "injections"). Additionally, we demonstrate that these two whitening filters show excellent agreement in χ2 value, a discriminator for GW signals.
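For orientation, a simple frequency-domain whitening of simulated colored noise; this is the conventional acausal operation whose latency motivates the paper, not the zero-latency filter itself, and the noise model is an arbitrary assumption:

```python
# Sketch of data whitening, the step whose latency the paper targets. This is
# a plain frequency-domain (acausal) whitening of simulated colored noise, not
# the zero-latency time-domain filter studied in the paper.
import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(0)
fs, n = 4096, 4096 * 8                      # sample rate (Hz), number of samples

# Simulated colored noise: white noise through a first-order low-pass recursion.
white = rng.standard_normal(n)
colored = lfilter([1.0], [1.0, -0.95], white)

# Estimate the amplitude spectral density and divide it out in the frequency domain.
f_psd, psd = welch(colored, fs=fs, nperseg=1024)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
asd = np.interp(freqs, f_psd, np.sqrt(psd))
asd[asd == 0] = np.inf                      # avoid division by zero at empty bins
whitened = np.fft.irfft(np.fft.rfft(colored) / asd, n)

# A whitened stream should have an approximately flat spectrum.
_, psd_white = welch(whitened, fs=fs, nperseg=1024)
print("colored PSD dynamic range :", round(psd.max() / psd.min(), 1))
print("whitened PSD dynamic range:", round(psd_white.max() / psd_white.min(), 1))
```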
NASA Technical Reports Server (NTRS)
Quach, William L.; Sesplaukis, Tadas; Owen-Mankovich, Kyran J.; Nakamura, Lori L.
2012-01-01
WMD provides a centralized interface to access data stored in the Mission Data Processing and Control System (MPCS) GDS (Ground Data Systems) databases during MSL (Mars Science Laboratory) Testbeds and ATLO (Assembly, Test, and Launch Operations) test sessions. The MSL project organizes its data based on venue (Testbed, ATLO, Ops), with each venue's data stored on a separate database, making it cumbersome for users to access data across the various venues. WMD allows sessions to be retrieved through a Web-based search using several criteria: host name, session start date, or session ID number. Sessions matching the search criteria will be displayed and users can then select a session to obtain and analyze the associated data. The uniqueness of this software comes from its collection of data retrieval and analysis features provided through a single interface. This allows users to obtain their data and perform the necessary analysis without having to worry about where and how to get the data, which may be stored in various locations. Additionally, this software is a Web application that only requires a standard browser without additional plug-ins, providing a cross-platform, lightweight solution for users to retrieve and analyze their data. This software solves the problem of efficiently and easily finding and retrieving data from thousands of MSL Testbed and ATLO sessions. WMD allows the user to retrieve their session in as little as one mouse click, and then to quickly retrieve additional data associated with the session.
Efficient strategies to find diagnostic test accuracy studies in kidney journals.
Rogerson, Thomas E; Ladhani, Maleeka; Mitchell, Ruth; Craig, Jonathan C; Webster, Angela C
2015-08-01
Nephrologists looking for quick answers to diagnostic clinical questions in MEDLINE can use a range of published search strategies or Clinical Query limits to improve the precision of their searches. We aimed to evaluate existing search strategies for finding diagnostic test accuracy studies in nephrology journals. We assessed the accuracy of 14 search strategies for retrieving diagnostic test accuracy studies from three nephrology journals indexed in MEDLINE. Two investigators hand searched the same journals to create a reference set of diagnostic test accuracy studies to compare search strategy results against. We identified 103 diagnostic test accuracy studies, accounting for 2.1% of all studies published. The most specific search strategy was the Narrow Clinical Queries limit (sensitivity: 0.20, 95% CI 0.13-0.29; specificity: 0.99, 95% CI 0.99-0.99). Using the Narrow Clinical Queries limit, a searcher would need to screen three (95% CI 2-6) articles to find one diagnostic study. The most sensitive search strategy was van der Weijden 1999 Extended (sensitivity: 0.95; 95% CI 0.89-0.98; specificity 0.55, 95% CI 0.53-0.56) but required a searcher to screen 24 (95% CI 23-26) articles to find one diagnostic study. Bachmann 2002 was the best balanced search strategy, which was sensitive (0.88, 95% CI 0.81-0.94), but also specific (0.74, 95% CI 0.73-0.75), with a number needed to screen of 15 (95% CI 14-17). Diagnostic studies are infrequently published in nephrology journals. The addition of a strategy for diagnostic studies to a subject search strategy in MEDLINE may reduce the records needed to screen while preserving adequate search sensitivity for routine clinical use. © 2015 Asian Pacific Society of Nephrology.
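The number-needed-to-screen figures quoted above follow from sensitivity, specificity, and the 2.1% prevalence of diagnostic studies through the usual positive-predictive-value relationship; the short calculation below reproduces them to within rounding:

```python
# Number needed to screen = 1 / precision, with precision computed from
# sensitivity, specificity, and the prevalence of diagnostic studies (2.1%).
def number_needed_to_screen(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    precision = true_pos / (true_pos + false_pos)
    return 1.0 / precision

prevalence = 0.021   # 103 diagnostic studies out of all studies published
for name, sens, spec in [("Narrow Clinical Queries", 0.20, 0.99),
                         ("van der Weijden 1999 Extended", 0.95, 0.55),
                         ("Bachmann 2002", 0.88, 0.74)]:
    nns = number_needed_to_screen(sens, spec, prevalence)
    print(f"{name}: screen about {nns:.0f} records per diagnostic study found")
```

With the rounded sensitivity and specificity values given in the abstract, this prints roughly 3, 23, and 15 records per diagnostic study found, in line with the reported 3, 24, and 15.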
NASA Technical Reports Server (NTRS)
Michaud, N. H.
1979-01-01
A system of independent computer programs for the processing of digitized pulse code modulated (PCM) and frequency modulated (FM) data is described. Information is stored in a set of random files and accessed to produce both statistical and graphical output. The software system is designed primarily to present these reports within a twenty-four hour period for quick analysis of the helicopter's performance.
F.I.D.O. Focused Integration for Debris Observation
NASA Astrophysics Data System (ADS)
Ploschnitznig, J.
2013-09-01
The fact that satellites play a growing role in our day-to-day lives contributes to the overall assessment that these assets must be protected. As more and more objects enter space and begin to clutter this apparently endless vacuum, we begin to realize that these objects and their associated debris become a potential and recurring threat. The space surveillance community routinely attempts to catalog debris through broad-area search collection profiles, hoping to detect and track smaller and smaller objects. There are technical limitations to each collection system; we propose there may be new ways to increase the detection capability, effectively "teaching an old dog (FIDO) new tricks." Far too often, we are justly criticized for never "stepping out of the box". The philosophy of "if it's not broke, don't fix it" works great if you assume that we are not broke. The assumption that in order to "find" new space junk we need to increase our surveillance windows and try to cover as much space as possible may be appropriate for missile defense, but it is inappropriate for finding small space debris. Currently, our Phased Array Early Warning Systems support this yearly search program to try to acquire and track small space debris. A phased array can electronically scan the horizon very quickly, but the radar does have limitations. There is a closed-loop resource management equation that must be satisfied. By increasing search volume, we effectively reduce our instantaneous sensitivity, which will directly impact our ability to find smaller and smaller space debris. Our proposal is to focus on increasing sensitivity by reducing the search volume to statistically high-probability-of-detection volumes in space. There are two phases to this proposal, a theoretical and an empirical one. Theoretical: The first phase will be to investigate the current space catalog and use existing ephemeris data on all satellites in the Space Surveillance Catalog to identify volumes of space with a high likelihood of encountering transiting satellites. Also during this phase, candidate radar systems will be characterized to determine the sensitivity levels necessary to detect objects of certain sizes. Data integration plays a critical role in lowering the noise floor of the collection area in order to detect smaller and smaller objects. Reducing the search volume to these high-probability-of-intercept areas will allow the use of data integration to increase the likelihood of detecting small radar cross section objects. Empirical: The next phase is to employ this technique using a legacy collection system; the collection community may choose any collection system. The goal will be to demonstrate how focusing on a very specific area and employing data integration will increase the likelihood of detection of smaller objects. This will result in the creation of an Inter Range Vector (IRV), which can be handed off to downrange collection systems for additional tracking. The goal of FIDO will be to demonstrate how these legacy systems can be better employed to help find smaller and smaller debris.
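A back-of-the-envelope illustration (not part of the FIDO proposal itself) of the trade being described: with a fixed radar time budget, shrinking the searched volume leaves more pulses per beam position, and ideal coherent integration of N pulses buys roughly 10*log10(N) dB of sensitivity; the time budget and dwell time below are arbitrary assumptions:

```python
# With a fixed time budget, fewer beam positions means more pulses per beam.
# Ideal coherent integration of N pulses improves SNR by 10*log10(N) dB, and
# since SNR scales linearly with radar cross section, the same number of dB
# comes off the smallest detectable RCS. Non-coherent integration gains less.
import math

def integration_gain_db(n_pulses):
    return 10.0 * math.log10(n_pulses)

time_budget_s = 1.0          # time available for one revisit of the search fence (assumed)
pulse_s = 0.001              # time per pulse (assumed)
for beam_positions in (1000, 100, 10):
    pulses_per_beam = max(int(time_budget_s / (pulse_s * beam_positions)), 1)
    gain = integration_gain_db(pulses_per_beam)
    print(f"{beam_positions:5d} beam positions -> {pulses_per_beam:4d} pulses/beam, "
          f"~{gain:4.1f} dB of integration gain")
```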
46 CFR 154.540 - Quick-closing shut-off valves: Emergency shut-down system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES Design... emergency shut-down system that: (a) Closes all the valves; (b) Is actuated by a single control in at least two locations remote from the quick-closing valves; (c) Is actuated by a single control in each cargo...
46 CFR 154.540 - Quick-closing shut-off valves: Emergency shut-down system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES Design... emergency shut-down system that: (a) Closes all the valves; (b) Is actuated by a single control in at least two locations remote from the quick-closing valves; (c) Is actuated by a single control in each cargo...
ERIC Educational Resources Information Center
McGinnis, Kristy
2009-01-01
Taking a young child to the doctor is not always the easiest of tasks, even when the child does not have a disability. This can be seen in the sheer number of children's books on the subject. Using key words such as "going to the doctor," a quick search of Amazon.com's children's book listing brings up a list of over 1,200 books. While the books…
Creative foraging: An experimental paradigm for studying exploration and discovery
Mayo, Avraham E.; Mayo, Ruth; Rozenkrantz, Liron; Tendler, Avichai; Alon, Uri; Noy, Lior
2017-01-01
Creative exploration is central to science, art and cognitive development. However, research on creative exploration is limited by a lack of high-resolution automated paradigms. To address this, we present such an automated paradigm, the creative foraging game, in which people search for novel and valuable solutions in a large and well-defined space made of all possible shapes made of ten connected squares. Players discovered shape categories such as digits, letters, and airplanes as well as more abstract categories. They exploited each category, then dropped it to explore once again, and so on. Aligned with a prediction of optimal foraging theory (OFT), during exploration phases people moved along meandering paths that are about three times longer than the shortest paths between shapes; when exploiting a category of related shapes, they moved along the shortest paths. The discovery of a new category usually occurred at a non-prototypical and ambiguous shape, which can serve as an experimental proxy for creative leaps. People showed individual differences in their search patterns, along a continuum between two strategies: a mercurial quick-to-discover/quick-to-drop strategy and a thorough slow-to-discover/slow-to-drop strategy. Contrary to optimal foraging theory, players left exploitation to explore again well before categories were depleted. This paradigm opens the way for automated high-resolution study of creative exploration. PMID:28767668
Interdisciplinary eHealth Practice in Cancer Care: A Review of the Literature
Janssen, Anna; Hines, Monique; Nagarajan, Srivalli Vilapakkam; Kielly-Carroll, Candice; Shaw, Tim
2017-01-01
This review aimed to identify research that described how eHealth facilitates interdisciplinary cancer care and to understand the ways in which eHealth innovations are being used in this setting. An integrative review of eHealth interventions used for interdisciplinary care for people with cancer was conducted by systematically searching research databases in March 2015, and repeated in September 2016. Searches resulted in 8531 citations, of which 140 were retrieved and scanned in full, with twenty-six studies included in the review. Analysis of data extracted from the included articles revealed five broad themes: (i) data collection and accessibility; (ii) virtual multidisciplinary teams; (iii) communication between individuals involved in the delivery of health services; (iv) communication pathways between patients and cancer care teams; and (v) health professional-led change. Use of eHealth interventions in cancer care was widespread, particularly to support interdisciplinary care. However, research has focused on development and implementation of interventions, rather than on long-term impact. Further research is warranted to explore design, evaluation, and long-term sustainability of eHealth systems and interventions in interdisciplinary cancer care. Technology evolves quickly and researchers need to provide health professionals with timely guidance on how best to respond to new technologies in the health sector. PMID:29068377
Mining Social Media and Web Searches For Disease Detection
Yang, Y. Tony; Horneffer, Michael; DiLisio, Nicole
2013-01-01
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate. PMID:25170475
Interdisciplinary eHealth Practice in Cancer Care: A Review of the Literature.
Janssen, Anna; Brunner, Melissa; Keep, Melanie; Hines, Monique; Nagarajan, Srivalli Vilapakkam; Kielly-Carroll, Candice; Dennis, Sarah; McKeough, Zoe; Shaw, Tim
2017-10-25
This review aimed to identify research that described how eHealth facilitates interdisciplinary cancer care and to understand the ways in which eHealth innovations are being used in this setting. An integrative review of eHealth interventions used for interdisciplinary care for people with cancer was conducted by systematically searching research databases in March 2015, and repeated in September 2016. Searches resulted in 8531 citations, of which 140 were retrieved and scanned in full, with twenty-six studies included in the review. Analysis of data extracted from the included articles revealed five broad themes: (i) data collection and accessibility; (ii) virtual multidisciplinary teams; (iii) communication between individuals involved in the delivery of health services; (iv) communication pathways between patients and cancer care teams; and (v) health professional-led change. Use of eHealth interventions in cancer care was widespread, particularly to support interdisciplinary care. However, research has focused on development and implementation of interventions, rather than on long-term impact. Further research is warranted to explore design, evaluation, and long-term sustainability of eHealth systems and interventions in interdisciplinary cancer care. Technology evolves quickly and researchers need to provide health professionals with timely guidance on how best to respond to new technologies in the health sector.
Mining social media and web searches for disease detection.
Yang, Y Tony; Horneffer, Michael; DiLisio, Nicole
2013-04-28
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate.
A quality evaluation methodology of health web-pages for non-professionals.
Currò, Vincenzo; Buonuomo, Paola Sabrina; Onesimo, Roberta; de Rose, Paola; Vituzzi, Andrea; di Tanna, Gian Luca; D'Atri, Alessandro
2004-06-01
The proposal of an evaluation methodology for determining the quality of healthcare web sites for the dissemination of medical information to non-professionals. Three (macro) factors are considered for the quality evaluation: medical contents, accountability of the authors, and usability of the web site. Starting from two results in the literature, the problem of whether or not to introduce a weighting function has been investigated. This methodology has been validated on a specialized information content, i.e., sore throats, due to the large interest such a topic enjoys with target users. The World Wide Web was accessed using a meta-search system merging several search engines. A statistical analysis was made to compare the proposed methodology with the obtained ranks of the sample web pages. The statistical analysis confirms that the variables examined (per item and sub-factor) show substantially similar ranks and are capable of contributing to the evaluation of the main quality macro factors. A comparison between the aggregation functions in the proposed methodology (non-weighted averages) and the weighting functions derived from the literature allowed us to verify the suitability of the method. The proposed methodology suggests a simple approach which can quickly award an overall quality score for medical web sites oriented to non-professionals.
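A tiny sketch of the aggregation question the study examines, an overall site score as a non-weighted versus a weighted mean of the three macro factors; the factor scores and weights below are illustrative only:

```python
# Overall site quality as a non-weighted vs weighted mean of the three macro
# factors (medical contents, accountability, usability). Values are made up.
def quality_score(contents, accountability, usability, weights=None):
    scores = [contents, accountability, usability]
    if weights is None:
        return sum(scores) / len(scores)                      # non-weighted average
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

site = {"contents": 0.8, "accountability": 0.6, "usability": 0.9}
print("non-weighted:", round(quality_score(**site), 3))
print("weighted    :", round(quality_score(**site, weights=(0.5, 0.3, 0.2)), 3))
```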
Aviation System Analysis Capability Quick Response System Report
NASA Technical Reports Server (NTRS)
Roberts, Eileen; Villani, James A.; Ritter, Paul
1998-01-01
The purpose of this document is to present the additions and modifications made to the Aviation System Analysis Capability (ASAC) Quick Response System (QRS) in FY 1997 in support of the ASAC QRS development effort. This document contains an overview of the project background and scope and defines the QRS. The document also presents an overview of the Logistics Management Institute (LMI) facility that supports the QRS, and it includes a summary of the planned additions to the QRS in FY 1998. The document has five appendices.
HEALTH GeoJunction: place-time-concept browsing of health publications.
MacEachren, Alan M; Stryker, Michael S; Turton, Ian J; Pezanowski, Scott
2010-05-18
The volume of health science publications is escalating rapidly. Thus, keeping up with developments is becoming harder as is the task of finding important cross-domain connections. When geographic location is a relevant component of research reported in publications, these tasks are more difficult because standard search and indexing facilities have limited or no ability to identify geographic foci in documents. This paper introduces HEALTH GeoJunction, a web application that supports researchers in the task of quickly finding scientific publications that are relevant geographically and temporally as well as thematically. HEALTH GeoJunction is a geovisual analytics-enabled web application providing: (a) web services using computational reasoning methods to extract place-time-concept information from bibliographic data for documents and (b) visually-enabled place-time-concept query, filtering, and contextualizing tools that apply to both the documents and their extracted content. This paper focuses specifically on strategies for visually-enabled, iterative, facet-like, place-time-concept filtering that allows analysts to quickly drill down to scientific findings of interest in PubMed abstracts and to explore relations among abstracts and extracted concepts in place and time. The approach enables analysts to: find publications without knowing all relevant query parameters, recognize unanticipated geographic relations within and among documents in multiple health domains, identify the thematic emphasis of research targeting particular places, notice changes in concepts over time, and notice changes in places where concepts are emphasized. PubMed is a database of over 19 million biomedical abstracts and citations maintained by the National Center for Biotechnology Information; achieving quick filtering is an important contribution due to the database size. Including geography in filters is important due to rapidly escalating attention to geographic factors in public health. The implementation of mechanisms for iterative place-time-concept filtering makes it possible to narrow searches efficiently and quickly from thousands of documents to a small subset that meet place-time-concept constraints. Support for a more-like-this query creates the potential to identify unexpected connections across diverse areas of research. Multi-view visualization methods support understanding of the place, time, and concept components of document collections and enable comparison of filtered query results to the full set of publications.
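A minimal sketch of iterative place-time-concept (facet) filtering over abstract records, in the spirit of the drill-down described above; the record fields and sample data are invented for illustration and do not reflect the application's schema:

```python
# Iterative place-time-concept filtering over abstract records: each facet
# narrows the working set, so analysts can drill down step by step.
from dataclasses import dataclass, field

@dataclass
class Record:
    pmid: str
    year: int
    places: set = field(default_factory=set)
    concepts: set = field(default_factory=set)

def facet_filter(records, place=None, years=None, concept=None):
    """Apply place, time, and concept facets; a facet left as None is ignored."""
    out = records
    if place is not None:
        out = [r for r in out if place in r.places]
    if years is not None:
        lo, hi = years
        out = [r for r in out if lo <= r.year <= hi]
    if concept is not None:
        out = [r for r in out if concept in r.concepts]
    return out

records = [
    Record("111", 2006, {"Vietnam"}, {"avian influenza", "surveillance"}),
    Record("222", 2008, {"Indonesia"}, {"avian influenza", "vaccination"}),
    Record("333", 2009, {"Vietnam"}, {"dengue", "surveillance"}),
]

# Drill down step by step, as an analyst would with linked views.
step1 = facet_filter(records, place="Vietnam")
step2 = facet_filter(step1, years=(2005, 2007))
step3 = facet_filter(step2, concept="avian influenza")
print([r.pmid for r in step3])      # -> ['111']
```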
NASA Technical Reports Server (NTRS)
Remington, Roger; Williams, Douglas
1986-01-01
Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.
Öllinger, Michael; Jones, Gary; Knoblich, Günther
2014-03-01
The nine-dot problem is often used to demonstrate and explain mental impasse, creativity, and out of the box thinking. The present study investigated the interplay of a restricted initial search space, the likelihood of invoking a representational change, and the subsequent constraining of an unrestricted search space. In three experimental conditions, participants worked on different versions of the nine-dot problem that hinted at removing particular sources of difficulty from the standard problem. The hints were incremental such that the first suggested a possible route for a solution attempt; the second additionally indicated the dot at which lines meet on the solution path; and the final condition also provided non-dot locations that appear in the solution path. The results showed that in the experimental conditions, representational change is encountered more quickly and problems are solved more often than for the control group. We propose a cognitive model that focuses on general problem-solving heuristics and representational change to explain problem difficulty.
FIREDOC users manual, 3rd edition
NASA Astrophysics Data System (ADS)
Jason, Nora H.
1993-12-01
FIREDOC is the on-line bibliographic database which reflects the holdings (published reports, journal articles, conference proceedings, books, and audiovisual items) of the Fire Research Information Services (FRIS) at the Building and Fire Research Laboratory (BFRL), National Institute of Standards and Technology (NIST). This manual provides step-by-step procedures for entering and exiting the database via telecommunication lines, as well as a number of techniques for searching the database and processing the results of the searches. This Third Edition is necessitated by the change to a UNIX platform. The new computer allows for faster response time when searching via a modem and, in addition, offers Internet accessibility. FIREDOC may be used with personal computers, using DOS or Windows, or with Macintosh computers and workstations. A new section on how to access the Internet is included, along with one on how to obtain the references of interest to you. Appendix F: Quick Guide to Getting Started will be useful to both modem and Internet users.
Search and Graph Database Technologies for Biomedical Semantic Indexing: Experimental Analysis.
Segura Bedmar, Isabel; Martínez, Paloma; Carruana Martín, Adrián
2017-12-01
Biomedical semantic indexing is a very useful support tool for human curators in their efforts for indexing and cataloging the biomedical literature. The aim of this study was to describe a system to automatically assign Medical Subject Headings (MeSH) to biomedical articles from MEDLINE. Our approach relies on the assumption that similar documents should be classified by similar MeSH terms. Although previous work has already exploited document similarity by using a k-nearest neighbors algorithm, we represent documents as document vectors by search engine indexing and then compute the similarity between documents using cosine similarity. Once the most similar documents for a given input document are retrieved, we rank their MeSH terms to choose the most suitable set for the input document. To do this, we define a scoring function that takes into account the frequency of the term in the set of retrieved documents and the similarity between the input document and each retrieved document. In addition, we implement guidelines proposed by human curators to annotate MEDLINE articles; in particular, the heuristic that says if 3 MeSH terms are proposed to classify an article and they share the same ancestor, they should be replaced by this ancestor. The representation of the MeSH thesaurus as a graph database allows us to employ graph search algorithms to quickly and easily capture hierarchical relationships such as the lowest common ancestor between terms. Our experiments show promising results with an F1 of 69% on the test dataset. To the best of our knowledge, this is the first work that combines search and graph database technologies for the task of biomedical semantic indexing. Due to its horizontal scalability, ElasticSearch becomes a real solution to index large collections of documents (such as the bibliographic database MEDLINE). Moreover, the use of graph search algorithms for accessing MeSH information could provide a support tool for cataloging MEDLINE abstracts in real time. ©Isabel Segura Bedmar, Paloma Martínez, Adrián Carruana Martín. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 01.12.2017.
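The abstract does not give the exact scoring formula, so the sketch below is only one plausible reading: each candidate MeSH term from the retrieved neighbours is scored by summing the cosine similarities of the documents that carry it, which jointly captures term frequency and document similarity. Names such as `score_mesh_terms` are illustrative, not taken from the paper.

```python
from collections import defaultdict

def score_mesh_terms(retrieved):
    """Score candidate MeSH terms from the most similar documents.

    `retrieved` is a list of (similarity, mesh_terms) pairs, where similarity is the
    cosine similarity between the input document and a retrieved document. Summing
    similarities weights each term by both its frequency and the closeness of the
    documents that carry it (one plausible reading of the scoring described above)."""
    scores = defaultdict(float)
    for similarity, mesh_terms in retrieved:
        for term in mesh_terms:
            scores[term] += similarity
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

neighbours = [
    (0.92, {"Neoplasms", "Humans"}),
    (0.88, {"Neoplasms", "Mice"}),
    (0.40, {"Humans"}),
]
print(score_mesh_terms(neighbours)[:2])   # 'Neoplasms' ranks first, then 'Humans'
```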
NASA Astrophysics Data System (ADS)
Frommer, Joshua B.
This work develops and implements a solution framework that allows for an integrated solution to a resource allocation system-of-systems problem associated with designing vehicles for integration into an existing fleet to extend that fleet's capability while improving efficiency. Typically, aircraft design focuses on using a specific design mission while a fleet perspective would provide a broader capability. Aspects of design for both the vehicles and missions may be, for simplicity, deterministic in nature or, in a model that reflects actual conditions, uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems will be modeled more accurately with non-deterministic values, and the designed platforms will be evaluated using reliability analysis. The reliability, defined as the probability of a platform or set of platforms to complete possible missions, will contribute to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and includes the ideas of reliability in the overall system-level design space. The concurrent design and allocation system-of-systems problem is a multi-objective mixed integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure, and depicted in MILSTD-3013. The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of the vehicle for high-speed attack combined with a long loiter period is considerably different from that for quick cruise to an area combined with a low speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution type for this kind of problem. On the vehicle-level of the problem, different technology can have an impact on the fleet-level. One such technology is Morphing, the ability to change shape, which is an ideal candidate technology for missions with dissimilar segments, such as the aforementioned two. A framework, using surrogate models based on optimally-sized aircraft, and using probabilistic parameters to define a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework to solve system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated the aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but only considered a fleet consisting of one aircraft type. 
The third phase incorporated the simultaneous design of a new vehicle and allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario. The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that new technology impact can be assessed at the fleet level using conceptual design principles.
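As a rough illustration of the reliability metric described above (the probability that a platform completes a randomly drawn mission), the sketch below runs a toy Monte Carlo estimate. The capability model and all numbers are invented placeholders, not values from the thesis.

```python
import random

def mission_success(speed_kts, endurance_hr, search_area_nm2, sweep_width_nm):
    """Toy capability check: can the aircraft sweep the required search area within
    its endurance? Real analyses use far richer vehicle and mission models."""
    coverage = speed_kts * endurance_hr * sweep_width_nm
    return coverage >= search_area_nm2

def fleet_reliability(n_trials=10_000, seed=0):
    """Estimate reliability as the fraction of randomly drawn search-and-find missions
    (non-deterministic mission parameters) the aircraft can complete."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        area = rng.uniform(5_000, 40_000)   # required search area in nm^2 (assumed range)
        successes += mission_success(speed_kts=180, endurance_hr=8,
                                     search_area_nm2=area, sweep_width_nm=20)
    return successes / n_trials

print(f"estimated reliability: {fleet_reliability():.3f}")
```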
Kingfisher: a system for remote sensing image database management
NASA Astrophysics Data System (ADS)
Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.
2003-04-01
At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originating from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a small amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.
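A minimal sketch of the similarity-based retrieval idea follows, assuming each archived image has already been reduced to a resolution-independent texture feature vector. The feature values and the use of cosine similarity are illustrative; the actual Kingfisher descriptors and matching technique are not specified in the abstract.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_features, archive, k=3):
    """Rank archived images by similarity of their texture descriptors to the query.
    `archive` maps an image id to its feature vector."""
    ranked = sorted(archive.items(),
                    key=lambda item: cosine_similarity(query_features, item[1]),
                    reverse=True)
    return ranked[:k]

archive = {
    "xsar_001": [0.12, 0.80, 0.33],
    "xsar_002": [0.70, 0.10, 0.55],
    "xsar_003": [0.15, 0.78, 0.30],
}
top = retrieve([0.14, 0.79, 0.31], archive, k=2)
print([img_id for img_id, _ in top])   # the two texture-similar images rank highest
```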
Radiotherapy supporting system based on the image database using IS&C magneto-optical disk
NASA Astrophysics Data System (ADS)
Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi
1994-05-01
Since radiation oncologists make treatment plans from prior experience, information about previous cases is helpful in planning radiation treatment. We have developed a supporting system for radiation therapy. A case-based reasoning method was implemented to search the treatments and images of past cases. The system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, along with treatment records showing examples of the radiation treatment. With this system, radiotherapists can easily make suitable radiation therapy plans, and it helps prevent inaccurate planning due to preconceptions and/or lack of knowledge. Images are stored on magneto-optical disks and demographic data is recorded on the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format, which provides interchangeability and reproducibility of the medical information, including images and other demographic data.
Attention-based long-lasting sensitization and suppression of colors.
Tseng, Chia-Huei; Vidnyanszky, Zoltan; Papathomas, Thomas; Sperling, George
2010-02-22
In contrast to the short-duration and quick reversibility of attention, a long-term sensitization to color based on protracted attention in a visual search task was reported by Tseng, Gobell, and Sperling (2004). When subjects were trained for a few hours to search for a red object among colored distracters, sensitivity to red was increased for weeks. This sensitization was quantified using ambiguous motion displays containing isoluminant red-green and texture-contrast gratings, in which the perceived motion-direction depended both on the attended color and on the relative red-green saturation. Such long-term effects could result from either sensitization of the attended color, or suppression of unattended colors, or a combination of the two. Here we unconfound these effects by eliminating one of the paired colors of the motion display from the search task. The other paired color in the motion display can then be either a target or a distracter in the search task. Thereby, we separately measure the effect of attention on sensitizing the target color or suppressing distracter colors. The results indicate that only sensitization of the target color in the search task is statistically significant for the present experimental conditions. We conclude that selective attention to a color in our visual search task caused long-term sensitization to the attended color but not significant long-term suppression of the unattended color. Copyright 2009 Elsevier Ltd. All rights reserved.
A comparative study of six European databases of medically oriented Web resources.
Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes
2005-10-01
The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.
Utz, Kathrin S.; Hankeln, Thomas M. A.; Jung, Lena; Lämmer, Alexandra; Waschbisch, Anne; Lee, De-Hyung; Linker, Ralf A.; Schenk, Thomas
2013-01-01
Background Despite the high frequency of cognitive impairment in multiple sclerosis, its assessment has not yet entered clinical routine, due to a lack of time-saving, suitable tests for patients with multiple sclerosis. Objective The aim of the study was to compare the paradigm of visual search with neuropsychological standard tests, in order to identify the test that discriminates best between patients with multiple sclerosis and healthy individuals concerning cognitive functions, without being susceptible to practice effects. Methods Patients with relapsing remitting multiple sclerosis (n = 38) and age- and gender-matched healthy individuals (n = 40) were tested with common neuropsychological tests and a computer-based visual search task, whereby a target stimulus has to be detected amongst distracting stimuli on a touch screen. Twenty-eight of the healthy individuals were re-tested in order to determine potential practice effects. Results Mean reaction time reflecting visual attention and movement time indicating motor execution in the visual search task discriminated best between healthy individuals and patients with multiple sclerosis, without practice effects. Conclusions Visual search is a promising instrument for the assessment of cognitive functions and of potential cognitive changes in patients with multiple sclerosis thanks to its good discriminatory power and insusceptibility to practice effects. PMID:24282604
Minimoon Survey with Subaru Hyper Suprime-Cam
NASA Astrophysics Data System (ADS)
Jedicke, Robert; Boe, Ben; Bolin, Bryce T.; Bottke, William; Chyba, Monique; Denneau, Larry; Dodds, Curt; Granvik, Mikael; Kleyna, Jan; Weryk, Robert J.
2017-10-01
We will present the status of our search for minimoons using Hyper Suprime-Cam on the Subaru telescope on Maunakea, Hawaii. We use the term 'minimoon' to refer to objects that are gravitationally bound to the Earth-Moon system, make at least one revolution around the barycenter in a co-rotating frame relative to the Earth-Sun axis, and are within 3 Earth Hill-sphere radii (~12 LD). There are one or two 1 to 2 meter diameter minimoons in the steady state population at any time, and about a dozen larger than 50 cm diameter. 'Drifters' are also bound to the Earth-Moon system but make less than one revolution about the barycenter. The combined population of minimoons and drifters provides a new opportunity for scientific exploration of small asteroids and for testing concepts for in-situ resource utilization. These objects provide interesting challenges for rendezvous missions because of their limited lifetime and complicated trajectories. Furthermore, they are difficult to detect because they are small, available for a limited time period, and move quickly across the sky.
Lab on a Chip Application Development for Exploration
NASA Technical Reports Server (NTRS)
Monaco, Lisa
2004-01-01
At Marshall Space Flight Center a new capability has been established to aid the advancement of microfluidics for space flight monitoring systems. The Lab-On-a-Chip Application Development (LOCAD) team has created a program for quickly and economically advancing Lab-On-a-Chip (LOC) applications from Technology Readiness Levels (TRL) 1 and 2 to TRL 6 and 7. Scientists and engineers can use LOCAD's process to efficiently learn about microfluidics and determine whether microfluidics is applicable to their needs. Once applicability has been determined, LOCAD can perform tests to develop new fluidic protocols, which differ from macro-scale chemical reaction protocols. With this information, new micro-devices can be created, such as a microfluidic system to aid in the search for life, past and present, on Mars. Particular indicators in the Martian soil can contain direct evidence of life, but extracting that information from the soil and presenting it to the proper detectors requires multiple fluidic/chemical operations. This is where LOCAD provides its unique abilities.
Han, Min; Fan, Jianchao; Wang, Jun
2011-09-01
A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.
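A minimal, one-dimensional sketch of a particle update in this spirit is shown below. It is not the paper's GPSO formulation: the logistic map stands in for the chaotic map, and Gaussian draws modulate the cognitive and social weights purely for illustration of how exploration and exploitation can be balanced.

```python
import random

def pso_step(positions, velocities, personal_best, global_best,
             w=0.7, chaos=0.37, rng=None):
    """One swarm update for one-dimensional particles. `chaos` stands in for a
    chaotic-map state modulating Gaussian-distributed acceleration weights;
    this is an illustration, not the exact GPSO formulation in the paper."""
    rng = rng or random.Random(1)
    chaos = 4.0 * chaos * (1.0 - chaos)                # logistic map as the chaotic driver
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, personal_best):
        c1 = abs(rng.gauss(0.0, 1.0)) * chaos          # exploration pull toward personal best
        c2 = abs(rng.gauss(0.0, 1.0)) * (1.0 - chaos)  # exploitation pull toward global best
        v_next = w * v + c1 * (pb - x) + c2 * (global_best - x)
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel, chaos

pos, vel, chaos = pso_step([0.0, 2.0, -1.0], [0.1, -0.2, 0.0], [0.5, 1.5, -0.5], global_best=1.0)
print(pos)
```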
Pre-fire warning system and method using a perfluorocarbon tracer
Dietz, R.N.; Senum, G.I.
1994-11-08
A composition and method are disclosed for detecting thermal overheating of an apparatus or system and for quickly and accurately locating the portions of the apparatus or system that experience a predetermined degree of such overheating. A composition made according to the invention includes perfluorocarbon tracers (PFTs) mixed with certain non-reactive carrier compounds that are effective to trap or block the PFTs within the composition at normal room temperature or at normal operating temperature of the coated apparatus or system. When a predetermined degree of overheating occurs in any of the coated components of the apparatus or system, PFTs are emitted from the compositions at a rate corresponding to the degree of overheating of the component. An associated PFT detector (or detectors) is provided and monitored to quickly identify the type of PFTs emitted so that the PFTs can be correlated with the respective PFT in the coating compositions applied on respective components in the system, thereby to quickly and accurately localize the source of the overheating of such components. 4 figs.
Pre-fire warning system and method using a perfluorocarbon tracer
Dietz, Russell N.; Senum, Gunnar I.
1994-01-01
A composition and method for detecting thermal overheating of an apparatus or system and for quickly and accurately locating the portions of the apparatus or system that experience a predetermined degree of such overheating. A composition made according to the invention includes perfluorocarbon tracers (PFTs) mixed with certain non-reactive carrier compounds that are effective to trap or block the PFTs within the composition at normal room temperature or at normal operating temperature of the coated apparatus or system. When a predetermined degree of overheating occurs in any of the coated components of the apparatus or system, PFTs are emitted from the compositions at a rate corresponding to the degree of overheating of the component. An associated PFT detector (or detectors) is provided and monitored to quickly identify the type of PFTs emitted so that the PFTs can be correlated with the respective PFT in the coating compositions applied on respective components in the system, thereby to quickly and accurately localize the source of the overheating of such components.
Simulated Raman Spectral Analysis of Organic Molecules
NASA Astrophysics Data System (ADS)
Lu, Lu
The advent of the laser technology in the 1960s solved the main difficulty of Raman spectroscopy, resulted in simplified Raman spectroscopy instruments and also boosted the sensitivity of the technique. Up till now, Raman spectroscopy is commonly used in chemistry and biology. As vibrational information is specific to the chemical bonds, Raman spectroscopy provides fingerprints to identify the type of molecules in the sample. In this thesis, we simulate the Raman Spectrum of organic and inorganic materials by General Atomic and Molecular Electronic Structure System (GAMESS) and Gaussian, two computational codes that perform several general chemistry calculations. We run these codes on our CPU-based high-performance cluster (HPC). Through the message passing interface (MPI), a standardized and portable message-passing system which can make the codes run in parallel, we are able to decrease the amount of time for computation and increase the sizes and capacities of systems simulated by the codes. From our simulations, we will set up a database that allows search algorithm to quickly identify N-H and O-H bonds in different materials. Our ultimate goal is to analyze and identify the spectra of organic matter compositions from meteorites and compared these spectra with terrestrial biologically-produced amino acids and residues.
DPS Planetary Science Graduate Programs Listing: A Resource for Students and Advisors
NASA Astrophysics Data System (ADS)
Klassen, David R.; Roman, Anthony; Meinke, Bonnie
2015-11-01
We began a web page on the DPS Education site in 2013 listing all the graduate programs we could find that can lead to a PhD with a planetary science focus. Since then the static page has evolved into a database-driven, filtered-search site. It is intended to be a useful resource for both undergraduate students and undergraduate advisers, allowing them to find and compare programs across a basic set of search criteria. From the filtered list users can click on links to get a "quick look" at the database information and follow links to the program main site.The reason for such a list is because planetary science is a heading that covers an extremely diverse set of disciplines. The usual case is that planetary scientists are housed in a discipline-placed department so that finding them is typically not easy—undergraduates cannot look for a Planetary Science department, but must (somehow) know to search for them in all their possible places. This can overwhelm even determined undergraduate student, and even many advisers!We present here the updated site and a walk-through of the basic features. In addition we ask for community feedback on additional features to make the system more usable for them. Finally, we call upon those mentoring and advising undergraduates to use this resource, and program admission chairs to continue to review their entry and provide us with the most up-to-date information.The URL for our site is http://dps.aas.org/education/graduate-schools.
Mountain Search and Rescue with Remotely Piloted Aircraft Systems
NASA Astrophysics Data System (ADS)
Silvagni, Mario; Tonoli, Andrea; Zenerino, Enrico; Chiaberge, Marcello
2016-04-01
Remotely Piloted Aircraft Systems (RPAS) also known as Unmanned Aerial Systems (UAS) are nowadays becoming more and more popular in several applications. Even though a complete regulation is not yet available all over the world, researches, tests and some real case applications are wide spreading. These technologies can bring many benefits also to the mountain operations especially in emergencies and harsh environmental conditions, such as Search and Rescue (SAR) and avalanche rescue missions. In fact, during last decade, the number of people practicing winter sports in backcountry environment is increased and one of the greatest hazards for recreationists and professionals are avalanches. Often these accidents have severe consequences leading, mostly, to asphyxia-related death, which is confirmed by the hard drop of survival probability after ten minutes from the burying. Therefore, it is essential to minimize the time of burial. Modern avalanche beacon (ARTVA) interface guides the rescuer during the search phase reducing its time. Even if modern avalanche beacons are valid and reliable, the seeking range influences the rescue time. Furthermore, the environment and morphologic conditions of avalanches usually complicates the rescues. The recursive methodology of this kind of searching offers the opportunity to use automatic device like drones (RPAS). These systems allow performing all the required tasks autonomously, with high accuracy and without exposing the rescuers to additional risks due to secondary avalanches. The availability of highly integrated electronics and subsystems specifically meant for the applications, better batteries, miniaturized payload and, in general, affordable prices, has led to the availability of small RPAS with very good performances that can give interesting application opportunities in unconventional environments. The present work is one of the outcome from the experience made by the authors in RPAS fields and in Mechatronics devices for Mountain Safety and shows the design, construction and testing of a multipurpose RPAS to be used in mountain operations. The flying, multi-rotors based, platform and its embedded avionics is designed to meet environmental requirements such as temperature, altitude and wind, assuring the capability of carrying different payloads (separately or together) aimed to: • Avalanche Beacon search with automatic signal recognition and path following algorithms for quick buried identification. • Visual (visible and InfraRed) search and rescue for identifying missing persons on snow and woods even during night. • Customizable payload deployment to drop emergency kits or specific explosive cartridge for controlled avalanche detachment. The resulting small (less than 5kg) RPA is capable of full autonomous flight (including take-off and landing) on a pre-programmed, or easily configurable, custom mission. Furthermore, the embedded autopilot manages the sensors measurements (i.e. beacons or cameras) to update the flying mission. Specific features such as laser altimeter for terrain following have been developed and implemented. Remote control of the RPA from a ground station is available and a possible infrastructure, based on cloud/on-line architecture, for the real application is presented.
Fukushima, Hiroshi; Kobayashi, Masaki; Kawano, Keizo; Morimoto, Shinji
2018-06-01
The Third International Consensus Definitions for Sepsis and Septic Shock Task Force proposed a new definition of sepsis based on the SOFA (Sequential [Sepsis-related] Organ Failure Assessment) score and introduced a novel scoring system, quickSOFA, to screen patients at high risk for sepsis. However, the clinical usefulness of these systems is unclear. Therefore, we investigated predictive performance for mortality in patients with acute pyelonephritis associated with upper urinary tract calculi. This retrospective study included 141 consecutive patients who were clinically diagnosed with acute pyelonephritis associated with upper urinary tract calculi outside the intensive care unit. We evaluated the performance of the quickSOFA, SOFA and SIRS (systemic inflammatory response syndrome) scores to predict in-hospital mortality and intensive care unit admission using the AUC of the ROC curve, net reclassification, integrated discrimination improvements and decision curve analysis. A total of 11 patients (8%) died in the hospital and 26 (18%) were admitted to the intensive care unit. The AUC of quickSOFA to predict in-hospital mortality and intensive care unit admission was significantly greater than that of SIRS (each p <0.001) and comparable to that of SOFA (p = 0.47 and 0.57, respectively). When incorporated into the baseline model consisting of patient age, gender and the Charlson Comorbidity Index, quickSOFA and SOFA provided a greater change in AUC, and in net classification and integrated discrimination improvements than SIRS for each outcome. Decision curve analyses revealed that the quickSOFA and SOFA incorporated models showed a superior net benefit compared to the SIRS incorporated model for most examined probabilities of the 2 outcomes. The in-hospital mortality rate of patients with a quickSOFA score of 2 or greater and a SOFA score of 7 or greater, which were the optimal cutoffs determined by the Youden index, was 18% and 28%, respectively. SOFA and quickSOFA are more clinically useful scoring systems than SIRS to predict mortality in patients with acute pyelonephritis associated with upper urinary tract calculi. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
2010-01-01
Receiverships in Haiti and the Dominican Republic. The Navy ran the Virgin Islands until 1931, Guam until 1950, and American Samoa until 1951...in Puerto Rico, at both ends of the Canal Zone in Panama, at Guantanamo Bay in Cuba, and in the Virgin Islands. In addition, interest was...Panamanian holiday island of Taboga. Panama quickly followed the US lead in declaring war on the Central Powers, but its major role was harassing resident
Stealthy River Navigation in Jungle Combat Conditions
2010-03-01
information necessary for a supply boat to travel from a depot to a resupply point that minimizes weighted risk, which is defined as the product of shade...weapons tend to rust quickly, and must be cleaned and oiled more frequently than in most other areas, canvas items rot and rubber deteriorates much...cells and the cell itself. Hence the number of neighbors of a cell v is 7, except when the cell is near the search area boundary. A single
Translations on Narcotics and Dangerous Drugs, Number 312
1977-07-29
to be sold... In another search, seven blank tubes and two orders for 200... were discovered. The suspects were identified as Miss...he shot a couple in the La Brasa restaurant, and made a quick get-away on a motorcycle. Sergio Anibal Velasquez Perez and Luz Elpidia Murillo...messengers Resistol 5,000, Pega Todo, solvents, aerosol and gasoline." As to these persons' guilt, he said, We cannot say that they are directly
The Heliophysics Data Environment: Open Source, Open Systems and Open Data.
NASA Astrophysics Data System (ADS)
King, Todd; Roberts, Aaron; Walker, Raymond; Thieman, James
2012-07-01
The Heliophysics Data Environment (HPDE) is a place for scientific discovery. Today the Heliophysics Data Environment is a framework of technologies, standards and services which enables the international community to collaborate more effectively in space physics research. Crafting a framework for a data environment begins with defining a model of the tasks to be performed, then defining the functional aspects and the work flow. The foundation of any data environment is an information model which defines the structure and content of the metadata necessary to perform the tasks. In the Heliophysics Data Environment the information model is the Space Physics Archive Search and Extract (SPASE) model and available resources are described by using this model. A described resource can reside anywhere on the internet which makes it possible for a national archive, mission, data center or individual researcher to be a provider. The generated metadata is shared, reviewed and harvested to enable services. Virtual Observatories use the metadata to provide community based portals. Through unique identifiers and registry services tools can quickly discover and access data available anywhere on the internet. This enables a researcher to quickly view and analyze data in a variety of settings and enhances the Heliophysics Data Environment. To illustrate the current Heliophysics Data Environment we present the design, architecture and operation of the Heliophysics framework. We then walk through a real example of using available tools to investigate the effects of the solar wind on Earth's magnetosphere.
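A small sketch of the metadata-harvesting step follows: a registry or virtual observatory pulls the resource identifier and name out of a descriptor so the resource becomes discoverable by tools. The XML snippet is a stripped-down illustration in the spirit of a SPASE record, not a complete or valid one, and the element subset shown is an assumption for the example.

```python
import xml.etree.ElementTree as ET

# An illustrative, stripped-down descriptor in the spirit of a SPASE record;
# a real record carries many more required elements and a namespace.
DESCRIPTOR = """
<Spase>
  <NumericalData>
    <ResourceID>spase://Example/NumericalData/DemoMission/Magnetometer</ResourceID>
    <ResourceHeader>
      <ResourceName>Demo magnetometer 1-minute averages</ResourceName>
      <Description>Hypothetical data set used to illustrate metadata harvesting.</Description>
    </ResourceHeader>
  </NumericalData>
</Spase>
"""

def index_resource(xml_text):
    """Extract the identifier and name a registry or virtual observatory
    would use to make the resource discoverable."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.findtext(".//ResourceID"),
        "name": root.findtext(".//ResourceName"),
    }

print(index_resource(DESCRIPTOR))
```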
Automatic document classification of biological literature
Chen, David; Müller, Hans-Michael; Sternberg, Paul W
2006-01-01
Background Document classification is a wide-spread problem with many applications, from organizing search engine snippets to spam filtering. We previously described Textpresso, a text-mining system for biological literature, which marks up full text according to a shallow ontology that includes terms of biological interest. This project investigates document classification in the context of biological literature, making use of the Textpresso markup of a corpus of Caenorhabditis elegans literature. Results We present a two-step text categorization algorithm to classify a corpus of C. elegans papers. Our classification method first uses a support vector machine-trained classifier, followed by a novel, phrase-based clustering algorithm. This clustering step autonomously creates cluster labels that are descriptive and understandable by humans. This clustering engine performed better on a standard test-set (Reuters 21578) compared to previously published results (F-value of 0.55 vs. 0.49), while producing cluster descriptions that appear more useful. A web interface allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. Conclusion We have demonstrated a simple method to classify biological documents that embodies an improvement over current methods. While the classification results are currently optimized for Caenorhabditis elegans papers by human-created rules, the classification engine can be adapted to different types of documents. We have demonstrated this by presenting a web interface that allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. PMID:16893465
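A minimal sketch of the first (SVM) step is shown below, using scikit-learn rather than the authors' Textpresso pipeline; the tiny corpus, labels, and TF-IDF features are placeholders, and the phrase-based clustering step is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training documents and labels standing in for a curated corpus.
train_docs = [
    "RNA interference screen in C. elegans identifies germline genes",
    "Axon guidance defects in unc-6 mutants",
    "Spam: cheap reagents, buy now",
]
train_labels = ["genetics", "neuroscience", "other"]

# TF-IDF features feeding a linear support vector classifier.
classifier = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
classifier.fit(train_docs, train_labels)

print(classifier.predict(["germline RNAi phenotype in C. elegans"]))  # expected: ['genetics']
```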
Improving the User Experience of Finding and Visualizing Oceanographic Data
NASA Astrophysics Data System (ADS)
Rauch, S.; Allison, M. D.; Groman, R. C.; Chandler, C. L.; Galvarino, C.; Gegg, S. R.; Kinkade, D.; Shepherd, A.; Wiebe, P. H.; Glover, D. M.
2013-12-01
Searching for and locating data of interest can be a challenge to researchers as increasing volumes of data are made available online through various data centers, repositories, and archives. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is keenly aware of this challenge and, as a result, has implemented features and technologies aimed at improving data discovery and enhancing the user experience. BCO-DMO was created in 2006 to manage and publish data from research projects funded by the Division of Ocean Sciences (OCE) Biological and Chemical Oceanography Sections and the Division of Polar Programs (PLR) Antarctic Sciences Organisms and Ecosystems Program (ANT) of the US National Science Foundation (NSF). The BCO-DMO text-based and geospatial-based data access systems provide users with tools to search, filter, and visualize data in order to efficiently find data of interest. The geospatial interface, developed using a suite of open-source software (including MapServer [1], OpenLayers [2], ExtJS [3], and MySQL [4]), allows users to search and filter/subset metadata based on program, project, or deployment, or by using a simple word search. The map responds based on user selections, presents options that allow the user to choose specific data parameters (e.g., a species or an individual drifter), and presents further options for visualizing those data on the map or in "quick-view" plots. The data managed and made available by BCO-DMO are very heterogeneous in nature, from in-situ biogeochemical, ecological, and physical data, to controlled laboratory experiments. Due to the heterogeneity of the data types, a 'one size fits all' approach to visualization cannot be applied. Datasets are visualized in a way that will best allow users to assess fitness for purpose. An advanced geospatial interface, which contains a semantically-enabled faceted search [5], is also available. These search facets are highly interactive and responsive, allowing users to construct their own custom searches by applying multiple filters. New filtering and visualization tools are continually being added to the BCO-DMO system as new data types are encountered and as we receive feedback from our data contributors and users. As our system becomes more complex, teaching users about the many interactive features becomes increasingly important. Tutorials and videos are made available online. Recent in-person classroom-style tutorials have proven useful for both demonstrating our system to users and for obtaining feedback to further improve the user experience. References: [1] University of Minnesota. MapServer: Open source web mapping. http://www.mapserver.org [2] OpenLayers: Free Maps for the Web. http://www.openlayers.org [3] Sencha. ExtJS. http://www.sencha.com/products/extjs [4] MySQL. http://www.mysql.com/ [5] Maffei, A. R., Rozell, E. A., West, P., Zednik, S., and Fox, P. A. 2011. Open Standards and Technologies in the S2S Framework. Abstract IN31A-1435 presented at American Geophysical Union 2011 Fall Meeting, San Francisco, CA, 7 December 2011.
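A small sketch of the kind of faceted, geospatial subsetting the interface performs is shown below; the record fields (program, latitude/longitude, measured parameters) and values are invented for illustration and do not reflect the actual BCO-DMO schema or software stack.

```python
# Illustrative dataset records; fields and values are invented.
datasets = [
    {"program": "ANT", "lat": -64.8, "lon": -64.1, "params": {"chlorophyll", "temperature"}},
    {"program": "OCE", "lat": 41.5, "lon": -70.7, "params": {"nitrate"}},
]

def filter_datasets(records, program=None, bbox=None, parameter=None):
    """Subset records by program, bounding box, and measured parameter.
    bbox is (lat_min, lat_max, lon_min, lon_max); facets left as None are ignored."""
    hits = []
    for r in records:
        if program and r["program"] != program:
            continue
        if bbox:
            lat_min, lat_max, lon_min, lon_max = bbox
            if not (lat_min <= r["lat"] <= lat_max and lon_min <= r["lon"] <= lon_max):
                continue
        if parameter and parameter not in r["params"]:
            continue
        hits.append(r)
    return hits

print(filter_datasets(datasets, program="ANT", parameter="chlorophyll"))
```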
Stoop, Nicky; Menendez, Mariano E; Mellema, Jos J; Ring, David
2018-01-01
The objective of this study is to evaluate the construct validity of the Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health instrument by establishing its correlation to the Quick-Disabilities of the Arm, Shoulder and Hand (QuickDASH) questionnaire in patients with upper extremity illness. A cohort of 112 patients completed a sociodemographic survey and the PROMIS Global Health and QuickDASH questionnaires. Pearson correlation coefficients were used to evaluate the association of the QuickDASH with the PROMIS Global Health items and subscales. Six of the 10 PROMIS Global Health items were associated with the QuickDASH. The PROMIS Global Physical Health subscale showed moderate correlation with QuickDASH and the Mental Health subscale. There was no significant relationship between the PROMIS Global Mental Health subscale and QuickDASH. The consistent finding that general patient-reported outcomes correlate moderately with regional patient-reported outcomes suggests that a small number of relatively nonspecific patient-reported outcome measures might be used to assess a variety of illnesses. In our opinion, the blending of physical and mental health questions in the PROMIS Global Health makes this instrument less useful for research or patient care.
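The construct-validity analysis boils down to Pearson correlations between instruments; a minimal sketch with fabricated scores is shown below (a negative sign is expected, since higher QuickDASH means more disability while higher PROMIS physical health means better function).

```python
from scipy.stats import pearsonr

# Fabricated scores for six hypothetical patients, for illustration only.
quickdash = [25.0, 40.9, 15.9, 61.4, 34.1, 52.3]
promis_physical = [48.0, 42.1, 54.2, 34.5, 44.9, 37.4]

r, p_value = pearsonr(quickdash, promis_physical)
print(f"r = {r:.2f}, p = {p_value:.3f}")   # strongly negative r in this toy data
```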
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Murphy, K. J.; Baynes, K.; Lynnes, C.
2016-12-01
With the volume of Earth observation data expanding rapidly, cloud computing is quickly changing the way Earth observation data is processed, analyzed, and visualized. The cloud infrastructure provides the flexibility to scale up to large volumes of data and handle high velocity data streams efficiently. Having freely available Earth observation data collocated on a cloud infrastructure creates opportunities for innovation and value-added data re-use in ways unforeseen by the original data provider. These innovations spur new industries and applications and spawn new scientific pathways that were previously limited due to data volume and computational infrastructure issues. NASA, in collaboration with Amazon, Google, and Microsoft, has jointly developed a set of recommendations to enable efficient transfer of Earth observation data from existing data systems to a cloud computing infrastructure. The purpose of these recommendations is to provide guidelines against which all data providers can evaluate existing data systems and address any issues uncovered, enabling efficient search, access, and use of large volumes of data. Additionally, these guidelines ensure that all cloud providers utilize a common methodology for bulk-downloading data from data providers, thus preventing the data providers from building custom capabilities to meet the needs of individual cloud providers. The intent is to share these recommendations with other Federal agencies and organizations that serve Earth observation data to enable efficient search, access, and use of large volumes of data. Additionally, the adoption of these recommendations will benefit data users interested in moving large volumes of data from data systems to any other location. These data users include the cloud providers, cloud users such as scientists, and other users working in a high performance computing environment who need to move large volumes of data.
Evaluative reports on medical malpractice policies in obstetrics: a rapid scoping review.
Cardoso, Roberta; Zarin, Wasifa; Nincic, Vera; Barber, Sarah Louise; Gulmezoglu, Ahmet Metin; Wilson, Charlotte; Wilson, Katherine; McDonald, Heather; Kenny, Meghan; Warren, Rachel; Straus, Sharon E; Tricco, Andrea C
2017-09-06
The clinical specialty of obstetrics is under particular scrutiny with increasing litigation costs and unnecessary tests and procedures done in attempts to prevent litigation. We aimed to identify reports evaluating or comparing the effectiveness of medical liability reforms and quality improvement strategies in improving litigation-related outcomes in obstetrics. We conducted a rapid scoping review with a 6-week timeline. MEDLINE, EMBASE, LexisNexis Academic, the Legal Scholarship Network, Justis, LegalTrac, QuickLaw, and HeinOnline were searched for publications in English from 2004 until June 2015. The selection criteria for screening were established a priori and pilot-tested. We included reports comparing or evaluating the impact of obstetrics-related medical liability reforms and quality improvement strategies on cost containment and litigation settlement across all countries. All levels of screening were done by two reviewers independently, and discrepancies were resolved by a third reviewer. In addition, two reviewers independently extracted relevant data using a pre-tested form, and discrepancies were resolved by a third reviewer. The results were summarized descriptively. The search resulted in 2729 citations, of which 14 reports met our eligibility criteria. Several initiatives for improving the medical malpractice litigation system were found, including no-fault approaches, patient safety policy initiatives, communication and resolution, caps on compensation and attorney fees, alternative payment system and liabilities, and limitations on litigation. Only a few litigation policies in obstetrics were evaluated or compared. Included documents showed that initiatives to reduce medical malpractice litigation could be associated with a decrease in adverse and malpractice events. However, due to heterogeneous settings (e.g., economic structure, healthcare system) and variation in the outcomes reported, the advantages and disadvantages of initiatives may vary.
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of the datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy established by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable'. This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy combines these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
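The wrapper idea can be sketched as follows: a candidate feature subset is encoded as a binary mask and scored by cross-validated classification accuracy. In this toy version, random masks stand in for the binary wolf moves and logistic regression stands in for the extreme learning machine, so it illustrates only the evaluation loop, not the WSA itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a high-dimensional bioinformatics dataset.
X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy of a classifier on the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(20):                        # random candidates in place of wolf moves
    mask = rng.random(X.shape[1]) < 0.3    # roughly 30% of features selected
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print(f"selected {best_mask.sum()} features, CV accuracy {best_score:.3f}")
```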
Multipass Target Search in Natural Environments
Otte, Michael W.; Sofge, Donald; Gupta, Satyandra K.
2017-01-01
Consider a disaster scenario where search and rescue workers must search difficult-to-access buildings during an earthquake or flood. Often, finding survivors a few hours sooner results in a dramatic increase in saved lives, suggesting the use of drones for expedient rescue operations. Entropy can be used to quantify the generation and resolution of uncertainty. When searching for targets, maximizing the mutual information of future sensor observations minimizes the expected target location uncertainty by minimizing the entropy of the future estimate. Motion planning for multi-target autonomous search requires planning over an area with an imperfect sensor and may require multiple passes, which is hindered by the submodularity property of mutual information. Further, mission duration constraints must be handled, which requires considering the vehicle's dynamics to generate feasible trajectories and planning trajectories that span the entire mission duration, something most information-gathering algorithms are incapable of doing. If unanticipated changes occur in an uncertain environment, new plans must be generated quickly. In addition, planning multipass trajectories requires evaluating path-dependent rewards, which means planning in the space of all previously selected actions, compounding the problem. We present an anytime algorithm for autonomous multipass target search in natural environments. The algorithm is capable of generating long-duration, dynamically feasible multipass coverage plans that maximize mutual information, using a variety of techniques such as ϵ-admissible heuristics to speed up the search. To the authors' knowledge this is the first attempt at efficiently solving multipass target search problems of such long duration. The proposed algorithm is based on best-first branch and bound and is benchmarked against state-of-the-art algorithms adapted to the problem in natural Simplex environments, gathering the most information in the given search time. PMID:29099087
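The entropy-based objective can be illustrated with a toy one-target grid belief and a simple detect/no-detect sensor model: the expected information gain of scanning a cell is the prior entropy minus the expected posterior entropy. This is only a stand-in for the mutual-information planning described above; the probabilities are invented.

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a discrete belief over grid cells."""
    return -sum(p * math.log2(p) for p in belief if p > 0.0)

def expected_entropy_after_scan(belief, cell, p_detect=0.9):
    """Expected posterior entropy if an imperfect sensor scans `cell`
    (two outcomes: detection pins down the target; a miss reweights the belief)."""
    p_target = belief[cell]
    p_pos = p_target * p_detect
    post_pos = [0.0] * len(belief)
    post_pos[cell] = 1.0                         # detection: no remaining uncertainty
    post_neg = list(belief)
    post_neg[cell] = p_target * (1 - p_detect)   # missed detection
    z = sum(post_neg)
    post_neg = [p / z for p in post_neg]
    return p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)

belief = [0.4, 0.3, 0.2, 0.1]
gains = [entropy(belief) - expected_entropy_after_scan(belief, c) for c in range(len(belief))]
print(gains)   # scanning the most likely cell yields the largest expected information gain
```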
A low-cost PC-based telemetry data-reduction system
NASA Astrophysics Data System (ADS)
Simms, D. A.; Butterfield, C. P.
1990-04-01
The Solar Energy Research Institute's (SERI) Wind Research Branch is using Pulse Code Modulation (PCM) telemetry data-acquisition systems to study horizontal-axis wind turbines. PCM telemetry systems are used in test installations that require accurate multiple-channel measurements taken from a variety of different locations, and SERI has found them ideal for use in tests requiring concurrent acquisition of data from multiple locations. SERI has developed a low-cost, PC-based data-reduction system to facilitate quick, in-the-field multiple-channel data analysis. Called the PC-PCM System, it consists of two basic components. First, AT-compatible hardware boards are used for decoding and combining PCM data streams. Up to four hardware boards can be installed in a single PC, which provides the capability to combine data from four PCM streams directly to PC disk or memory. Each stream can have up to 62 data channels. Second, a software package written for the DOS operating system was developed to simplify data-acquisition control and management. The software provides a quick, easy-to-use interface between the PC and PCM data streams. Called the Quick-Look Data Management Program, it is a comprehensive menu-driven package used to organize, acquire, process, and display information from incoming PCM data streams. This paper describes both hardware and software aspects of the SERI PC-PCM system, concentrating on features that make it useful in an experiment test environment to quickly examine and verify incoming data. Also discussed are problems and techniques associated with PC-based telemetry data acquisition, processing, and real-time display.
Incident Management in Academic Information System using ITIL Framework
NASA Astrophysics Data System (ADS)
Palilingan, V. R.; Batmetan, J. R.
2018-02-01
Incident management is very important to ensure the continuity of a system. Information systems require incident management so that they can provide the maximum level of the services they offer. Many of the problems that arise in academic information systems come from incidents that are not properly handled. The objective of this study is to find an appropriate way to manage incidents so that they do not grow into major problems. This research uses the ITIL framework to solve incident problems; the technique used is adopted and developed from the service operations section of the ITIL framework. The results show that 84.5% of incidents appearing in academic information systems can be handled quickly and appropriately, and the remaining 15.5% can be escalated so that they do not cause new problems. The incident management model applied here allows the academic information system to deliver academic services quickly, effectively, and efficiently, and to allocate resources appropriately so that incidents are managed quickly and easily.
1972-02-21
is a two-sided strategic nuclear exchange war gaming system. It is designed to assist the military planner in examining various facets of strategic...substantial, the data base preparation process is designed to provide an efficient means of assembling, maintaining, and organizing an input data base to... designed to assist in the study of strategic conflicts involving a large-scale exchange of nuclear weapons. The system is structured into five
NASA Technical Reports Server (NTRS)
LaMora, Andy; Raugh, A.; Erickson, K.; Grayzeck, E. J.; Knopf, W.; Morgan, T. H.
2012-01-01
NASA PDS hosts terabytes of valuable data from hundreds of data sources and spans decades of research. Data is stored on flat-file systems regulated through careful meta dictionaries. PDS's data is available to the public through its website which supports data searches through drill-down navigation. While the system returns data quickly, result sets in response to identical input differ depending on the drill-down path a user follows. To correct this issue, to allow custom searching, and to improve general accessibility, PDS sought to create a new data structure and API, and to use them to build applications that are a joy to use and showcase the value of the data to students, teachers and citizens. PDS engaged TopCoder and Harvard Business School through the NTL to pursue these objectives in a pilot effort. Scope was limited to Small Bodies Node data. NTL analyzed data, proposed a solution, and implemented it through a series of micro-contests. Contests focused on different segments of the problem: conceptualization, architectural design, implementation, testing, etc. To demonstrate the utility of the completed solution, NTL developed web-based and mobile applications that can compare targets, regardless of mission. To further explore the potential of the solution NTL hosted "Mash-up" challenges that integrated the API with other publicly available assets, to produce consumer and teaching applications, including an Augmented Reality iPad tool. Two contests were also posted to middle and high school students via the NoNameSite.com platform, and as a result of these contests, PDS/SBN has initiated a Facebook program. These contests defined and implemented a data warehouse with the necessary migration tools to transform legacy data, produced a public web interface for the new search, developed a public API, and produced four mobile applications that we expect to appeal to users both within and without the academic community.
NASA Astrophysics Data System (ADS)
LaMora, Andy; Raugh, A.; Erickson, K.; Grayzeck, E. J.; Knopf, W.; Lydon, M.; Lakhani, K.; Crusan, J.; Morgan, T. H.
2012-10-01
NASA PDS hosts terabytes of valuable data from hundreds of data sources and spans decades of research. Data is stored on flat-file systems regulated through careful meta dictionaries. PDS’s data is available to the public through its website which supports data searches through drill-down navigation. While the system returns data quickly, result sets in response to identical input differ depending on the drill-down path a user follows. To correct this issue, to allow custom searching, and to improve general accessibility, PDS sought to create a new data structure and API, and to use them to build applications that are a joy to use and showcase the value of the data to students, teachers and citizens. PDS engaged TopCoder and Harvard Business School through the NTL to pursue these objectives in a pilot effort. Scope was limited to Small Bodies Node data. NTL analyzed data, proposed a solution, and implemented it through a series of micro-contests. Contests focused on different segments of the problem: conceptualization, architectural design, implementation, testing, etc. To demonstrate the utility of the completed solution, NTL developed web-based and mobile applications that can compare targets, regardless of mission. To further explore the potential of the solution NTL hosted “Mash-up” challenges that integrated the API with other publicly available assets, to produce consumer and teaching applications, including an Augmented Reality iPad tool. Two contests were also posted to middle and high school students via the NoNameSite.com platform, and as a result of these contests, PDS/SBN has initiated a Facebook program. These contests defined and implemented a data warehouse with the necessary migration tools to transform legacy data, produced a public web interface for the new search, developed a public API, and produced four mobile applications that we expect to appeal to users both within and without the academic community.
Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C
1992-07-01
A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM PC stations to floppy disks. These disks are then uploaded to the mainframe computer through an IBM/PC equipped with the WordPerfect text editor and a computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). WordPerfect (5.1) processes and saves the text in DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC with a network or dial-in connection, once the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user with an online, powerful and quick method of searching for words specifying diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce, with optional parts of abstracts, into papers being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting after a search is performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.
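The workflow above is essentially a batch file transfer to a remote host. As a minimal present-day sketch of the upload step, Python's standard ftplib could be used; the host name, credentials and remote file name below are placeholders, not the OSU systems described in the abstract.

    from ftplib import FTP

    # Placeholder connection details; the original workflow used the SONNET
    # campus network and an IBM 3090 mainframe, which are not reproduced here.
    HOST = "mainframe.example.edu"
    USER = "user_id"
    PASSWORD = "password"

    def upload_search_results(local_path: str, remote_name: str) -> None:
        """Upload a downloaded MEDLINE/CINAHL result file to a remote host via FTP."""
        with FTP(HOST) as ftp:
            ftp.login(USER, PASSWORD)
            with open(local_path, "rb") as fh:
                ftp.storbinary(f"STOR {remote_name}", fh)

    if __name__ == "__main__":
        upload_search_results("DOWNLOAD.DOC", "DOWNLOAD.TXT")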
Ayers, John W; Ribisl, Kurt; Brownstein, John S
2011-03-16
Smokers can use the web to continue or quit their habit. Online vendors sell reduced or tax-free cigarettes, lowering smoking costs, while health advocates use the web to promote cessation. We examined how smokers' tax avoidance and smoking cessation Internet search queries were motivated by the United States' (US) 2009 State Children's Health Insurance Program (SCHIP) federal cigarette excise tax increase and two other state-specific tax increases. Google keyword searches among residents in a taxed geography (US or US state) were compared to an untaxed geography (Canada) for two years around each tax increase. Search data were normalized to a relative search volume (RSV) scale, where the highest search proportion was labeled 100 and lesser proportions were scaled relative to the highest proportion. Changes in RSV were estimated by comparing means during and after the tax increase to means before the tax increase, across taxed and untaxed geographies. The SCHIP tax was associated with an 11.8% (95% confidence interval [95%CI], 5.7 to 17.9; p<.001) immediate increase in cessation searches; however, searches quickly abated and approximated differences from pre-tax levels in Canada during the months after the tax. Tax avoidance searches increased 27.9% (95%CI, 15.9 to 39.9; p<.001) and 5.3% (95%CI, 3.6 to 7.1; p<.001) during and in the months after the tax compared to Canada, respectively, suggesting avoidance is the more pronounced and durable response. Trends were similar for state-specific tax increases but suggest strong interactive processes across taxes. When the SCHIP tax followed Florida's tax, versus not, it promoted more cessation and avoidance searches. Efforts to combat tax avoidance and increase cessation may be enhanced by using interventions targeted and tailored to smokers' searches. Search query surveillance is a valuable real-time, free and public method that may be generalized to other behavioral, biological, informational or psychological outcomes manifested online.
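As a sketch of the normalization described above, the snippet below rescales raw weekly query proportions to a relative search volume (RSV) scale and compares mean RSV before versus during/after a tax change. The weekly values and the tax-week index are invented illustrative numbers, not the study's data.

    from statistics import mean

    # Hypothetical weekly query proportions (e.g. queries per 10k searches); not real data.
    weekly = [1.2, 1.1, 1.3, 1.2, 2.4, 2.1, 1.6, 1.4]
    tax_week = 4  # index of the first week of the (hypothetical) tax increase

    # Relative search volume: the highest proportion becomes 100, others scale to it.
    peak = max(weekly)
    rsv = [100 * v / peak for v in weekly]

    before = rsv[:tax_week]
    during_after = rsv[tax_week:]
    print(f"mean RSV before: {mean(before):.1f}")
    print(f"mean RSV during/after: {mean(during_after):.1f}")
    print(f"absolute change in RSV: {mean(during_after) - mean(before):+.1f}")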
Rapid Acquisition of Army Command and Control Systems
2014-01-01
Fogg, Glenn, "How to Better Support the Need for Quick Reaction Capabilities in an Irregular Warfare ...," Army Communicator, Summer 2005.
OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics
NASA Astrophysics Data System (ADS)
Huffer, E.; Gleason, J. L.
2015-12-01
The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or, for registered users of the NASA Advanced Supercomputing (NAS) facility, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge, metadata provisioning becomes increasingly burdensome to data producers. The OlyMPUS system helps data providers produce semantically rich metadata, making their data more accessible to data consumers, and helps data consumers quickly discover and download the right data for their research.
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Xu, Minjie; Tian, Ailing
2017-04-01
A novel optical image encryption scheme is proposed based on a quick response code and a high-dimensional chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the quick response code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to a ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, parameters such as rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient for storage and transmission. Meanwhile, the security of the proposed scheme is greatly enhanced due to the high sensitivity to the initial values of the Chen system. Extensive cryptanalysis and simulations have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
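The abstract does not give the exact mapping from chaotic trajectories to encryption parameters, so the sketch below only illustrates the general idea: integrate the Chen system from secret initial values and fold the trajectory into rotation angles and a random phase mask. The integration step size, discarded transient, and modular mapping are assumptions made for illustration.

    import numpy as np

    def chen_trajectory(x0, y0, z0, n, dt=0.001, a=35.0, b=3.0, c=28.0, burn=5000):
        """Integrate the Chen system with a simple Euler scheme and return n samples."""
        x, y, z = x0, y0, z0
        out = np.empty((n, 3))
        for i in range(burn + n):
            dx = a * (y - x)
            dy = (c - a) * x - x * z + c * y
            dz = x * y - b * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            if i >= burn:
                out[i - burn] = (x, y, z)
        return out

    # Secret initial values act as the key (illustrative values only).
    traj = chen_trajectory(0.1, 0.2, 0.3, n=256 * 256 + 2)

    # Fold chaotic samples into two rotation angles and a 256x256 random phase mask.
    angles = np.mod(traj[:2, 0], 2 * np.pi)              # e.g. gyrator rotation angles
    phase_mask = np.mod(traj[2:, 1], 1.0).reshape(256, 256) * 2 * np.pi
    print("rotation angles:", angles)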
Management of scientific information with Google Drive.
Kubaszewski, Łukasz; Kaczmarczyk, Jacek; Nowakowski, Andrzej
2013-09-20
The amount and diversity of scientific publications requires a modern management system. By "management" we mean the process of gathering interesting information for the purpose of reading and archiving for quick access in future clinical practice and research activity. In the past, such a system required the physical existence of a library, either institutional or private. Nowadays, in an era dominated by electronic information, it is natural to migrate entire systems to a digital form. In the following paper we describe the structure and functions of an individual electronic library system (IELiS) for the management of scientific publications based on the Google Drive service. Architecture of the system. The system consists of a central element and peripheral devices. The central element is the virtual Google Drive provided by Google Inc. Physical elements of the system include a tablet with the Android operating system and a personal computer, both with internet access. Required software includes a program to view and edit files in PDF format on mobile devices and another to synchronize the files. Functioning of the system. The first step in creating the system is the collection of scientific papers in PDF format and their analysis. This step is performed most frequently on a tablet. At this stage, after being read, the papers are cataloged in a system of folders and subfolders, according to individual demands. During this stage, but not exclusively, the PDF files are annotated by the reader. This allows the user to quickly track down interesting information in the review or research process. Modification of the document title is performed at this stage as well. The second element of the system is the creation of a mirror database in the Google Drive virtual memory. Modified and cataloged papers are synchronized with Google Drive. At this stage, a fully functional electronic library of scientific information becomes available online. The third element of the system is a periodic two-way synchronization of data between Google Drive and the tablet, as occasional modification of the files with annotation or recataloging may be performed at both locations. The system architecture is designed to gather, catalog and analyze scientific publications. All steps are electronic, eliminating paper forms. Indexed files are available for re-reading and modification. The system allows for fast access to full-text search, with additional features making research easier. Team collaboration is also possible with full control of user privileges. Particularly important is the safety of collected data. In our opinion, the system exceeds many commercially available applications in terms of functionality and versatility.
A robust omnifont open-vocabulary Arabic OCR system using pseudo-2D-HMM
NASA Astrophysics Data System (ADS)
Rashwan, Abdullah M.; Rashwan, Mohsen A.; Abdel-Hameed, Ahmed; Abdou, Sherif; Khalil, A. H.
2012-01-01
Recognizing old documents is highly desirable, since the demand for quickly searching millions of archived documents has recently increased. Using Hidden Markov Models (HMMs) has been proven to be a good solution to tackle the main problems of recognizing typewritten Arabic characters. Although these attempts achieved remarkable success for omnifont OCR under very favorable conditions, they did not achieve the same performance in practical conditions, i.e., noisy documents. In this paper we present an omnifont, large-vocabulary Arabic OCR system using a Pseudo Two-Dimensional Hidden Markov Model (P2DHMM), which is a generalization of the HMM. The P2DHMM offers a more efficient way to model Arabic characters; such a model offers both minimal dependency on the font size/style (omnifont) and a high level of robustness against noise. The evaluation results of this system are very promising compared to a baseline HMM system and the best OCRs available in the market (Sakhr and NovoDynamics). The recognition accuracy of the P2DHMM classifier was measured against the classic HMM classifier; the average word accuracy rates for the P2DHMM and HMM classifiers are 79% and 66%, respectively. The overall system accuracy was measured against the Sakhr and NovoDynamics OCR systems; the average word accuracy rates for P2DHMM, NovoDynamics, and Sakhr are 74%, 71%, and 61%, respectively.
Decoupling the scholarly journal
Priem, Jason; Hemminger, Bradley M.
2011-01-01
Although many observers have advocated the reform of the scholarly publishing system, improvements to functions like peer review have been adopted sluggishly. We argue that this is due to the tight coupling of the journal system: the system's essential functions of archiving, registration, dissemination, and certification are bundled together and siloed into tens of thousands of individual journals. This tight coupling makes it difficult to change any one aspect of the system, choking out innovation. We suggest that the solution is the “decoupled journal (DcJ).” In this system, the functions are unbundled and performed as services, able to compete for patronage and evolve in response to the market. For instance, a scholar might deposit an article in her institutional repository, have it copyedited and typeset by one company, indexed for search by several others, self-marketed over her own social networks, and peer reviewed by one or more stamping agencies that connect her paper to external reviewers. The DcJ brings publishing out of its current seventeenth-century paradigm, and creates a Web-like environment of loosely joined pieces—a marketplace of tools that, like the Web, evolves quickly in response to new technologies and users' needs. Importantly, this system is able to evolve from the current one, requiring only the continued development of bolt-on services external to the journal, particularly for peer review. PMID:22493574
Short-term and long-term attentional biases to frequently encountered target features.
Sha, Li Z; Remington, Roger W; Jiang, Yuhong V
2017-07-01
It has long been known that frequently occurring targets are attended better than infrequent ones in visual search. But does this frequency-based attentional prioritization reflect momentary or durable changes in attention? Here we observed both short-term and long-term attentional biases for visual features as a function of different types of statistical associations between the targets, distractors, and features. Participants searched for a target, a line oriented horizontally or vertically among diagonal distractors, and reported its length. In one set of experiments we manipulated the target's color probability: Targets were more often in Color 1 than in Color 2. The distractors were in other colors. Participants found Color 1 targets more quickly than Color 2 targets, but this preference disappeared immediately when the target's color became random in the subsequent testing phase. In the other set of experiments, we manipulated the diagnostic values of the two colors: Color 1 was more often a target than a distractor; Color 2 was more often a distractor than a target. Participants found Color 1 targets more quickly than Color 2 targets. Importantly, and in contrast to the first set of experiments, the featural preference was sustained in the testing phase. These results suggest that short-term and long-term attentional biases are products of different statistical information. Finding a target momentarily activates its features, inducing short-term repetition priming. Long-term changes in attention, on the other hand, may rely on learning diagnostic features of the targets.
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Aviation System Analysis Capability (ASAC) Quick Response System (QRS) Test Report
NASA Technical Reports Server (NTRS)
Roberts, Eileen; Villani, James A.; Ritter, Paul
1997-01-01
This document is the Aviation System Analysis Capability (ASAC) Quick Response System (QRS) Test Report. The purpose of this document is to present the results of the QRS unit and system tests in support of the ASAC QRS development effort. This document contains an overview of the project background and scope, defines the QRS system and presents the additions made to the QRS this year, explains the assumptions, constraints, and approach used to conduct QRS Unit and System Testing, and presents the schedule used to perform QRS Testing. The document also presents an overview of the Logistics Management Institute (LMI) Test Facility and testing environment and summarizes the QRS Unit and System Test effort and results.
How do primary care physicians seek answers to clinical questions? A literature review.
Coumou, Herma C H; Meijman, Frans J
2006-01-01
The authors investigated the extent to which changes occurred between 1992 and 2005 in the ways that primary care physicians seek answers to clinical problems. What search strategies are used? How much time is spent on them? How do primary care physicians evaluate various search activities and information sources? Can a clinical librarian be useful to a primary care physician? Twenty-one original research papers and three literature reviews were examined. No systematic reviews were identified. Primary care physicians seek answers to only a limited number of questions about which they first consult colleagues and paper sources. This practice has basically not changed over the years despite the enormous increase in and better accessibility to electronic information sources. One of the major obstacles is the time it takes to search for information. Other difficulties primary care physicians experience are related to formulating an appropriate search question, finding an optimal search strategy, and interpreting the evidence found. Some studies have been done on the supporting role of a clinical librarian in general practice. However, the effects on professional behavior of the primary care physician and on patient outcome have not been studied. A small group of primary care physicians prefer this support to developing their own search skills. Primary care physicians have several options for finding quick answers: building a question-and-answer database, consulting filtered information sources, or using an intermediary such as a clinical librarian.
How do primary care physicians seek answers to clinical questions? A literature review
Coumou, Herma C. H.; Meijman, Frans J.
2006-01-01
Objectives: The authors investigated the extent to which changes occurred between 1992 and 2005 in the ways that primary care physicians seek answers to clinical problems. What search strategies are used? How much time is spent on them? How do primary care physicians evaluate various search activities and information sources? Can a clinical librarian be useful to a primary care physician? Methods: Twenty-one original research papers and three literature reviews were examined. No systematic reviews were identified. Results: Primary care physicians seek answers to only a limited number of questions about which they first consult colleagues and paper sources. This practice has basically not changed over the years despite the enormous increase in and better accessibility to electronic information sources. One of the major obstacles is the time it takes to search for information. Other difficulties primary care physicians experience are related to formulating an appropriate search question, finding an optimal search strategy, and interpreting the evidence found. Some studies have been done on the supporting role of a clinical librarian in general practice. However, the effects on professional behavior of the primary care physician and on patient outcome have not been studied. A small group of primary care physicians prefer this support to developing their own search skills. Discussion: Primary care physicians have several options for finding quick answers: building a question-and-answer database, consulting filtered information sources, or using an intermediary such as a clinical librarian. PMID:16404470
Kang, Hahk-Soo
2017-02-01
Genomics-based methods are now commonplace in natural products research. A phylogeny-guided mining approach provides a means to quickly screen a large number of microbial genomes or metagenomes in search of new biosynthetic gene clusters of interest. In this approach, biosynthetic genes serve as molecular markers, and phylogenetic trees built with known and unknown marker gene sequences are used to quickly prioritize biosynthetic gene clusters for characterization of their metabolites. An increase in the use of this approach has been observed over the last few years, along with the emergence of low-cost sequencing technologies. The aim of this review is to discuss the basic concept of a phylogeny-guided mining approach and to provide examples in which this approach was successfully applied to discover new natural products from microbial genomes and metagenomes. I believe that the phylogeny-guided mining approach will continue to play an important role in genomics-based natural products research.
A review on equipped hospital beds with wireless sensor networks for reducing bedsores
Ajami, Sima; Khaleghi, Lida
2015-01-01
At present, solutions to prevent bedsores include various techniques for moving and repositioning patients, which are not possible for some patients, dangerous for others, and also burdensome for health care providers. On the other hand, the development of information technology in the health care system, including the application of wireless sensor networks (WSNs), has enabled easy and quick service provision. It can provide a solution to prevent bedsores in motionless and disabled patients. Hence, the aim of this article was first to introduce WSNs in hospital beds and second, to identify the benefits and challenges in implementing this technology. This study was a nonsystematic review. The literature on WSNs to reduce and prevent bedsores was searched with the help of libraries, databases (PubMed, SCOPUS, and EMBASE), and search engines available at Google Scholar, covering 1974-2014, with inclusion criteria limited to publications in English and Persian. In our searches, we employed the following keywords and their combinations: "wireless sensor network," "smart bed," "information technology," "smart mattress," and "bedsore" in titles, keywords, abstracts, and full texts. In this study, more than 45 articles and reports were collected and 37 of them were selected based on their relevance. Identification and implementation of this technology will be a step toward mechanization of traditional procedures in providing care for hospitalized patients and disabled people. The smart bed and mattress, either alone or in combination with other technologies, should be capable of providing all of the novel features while still providing the comfort and safety usually associated with traditional and hospital mattresses. It can reduce the cost of bedsore care in the hospital's intensive care unit (ICU). PMID:26929768
STS-3 Induced Environment Contamination Monitor (IECM): Quick-look report
NASA Technical Reports Server (NTRS)
Miller, E. R. (Editor); Fountain, J. A. (Editor)
1982-01-01
The STS-3/Induced Environment Contamination Monitor (IECM) mission is described. The IECM system performance is discussed, and IECM mission time events are briefly described. Quick-look analyses are presented for each of the 10 instruments comprising the IECM on the flight of STS-3. Finally, a short summary is presented and plans are discussed for future IECM flights and opportunities for direct mapping of Orbiter effluents using the Remote Manipulator System.
The relationship between visual search and categorization of own- and other-age faces.
Craig, Belinda M; Lipp, Ottmar V
2018-03-13
Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.
Hillstrom, Anne P; Segabinazi, Joice D; Godwin, Hayward J; Liversedge, Simon P; Benson, Valerie
2017-02-19
We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target and the scene, now including the target at a likely location. During the participant's first saccade during search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Drug treatment of inborn errors of metabolism: a systematic review
Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed
2013-01-01
Background The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer-reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 evidence, 61 (74%) had grade 4, one each had grade 2 and grade 3, and three had grade 5. Conclusions To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field. PMID:23532493
Selfies of Imperial Cormorants (Phalacrocorax atriceps): What Is Happening Underwater?
Gómez-Laich, Agustina; Yoda, Ken; Zavalaga, Carlos; Quintana, Flavio
2015-01-01
During the last few years, the development of animal-borne still cameras and video recorders has enabled researchers to observe what a wild animal sees in the field. In the present study, we deployed miniaturized video recorders to investigate the underwater foraging behavior of Imperial cormorants (Phalacrocorax atriceps). Video footage was obtained from 12 animals and 49 dives comprising a total of 8.1 h of foraging data. Video information revealed that Imperial cormorants are almost exclusively benthic feeders. While foraging along the seafloor, animals did not necessarily keep their body horizontal but inclined it downwards. The head of the instrumented animal was always visible in the videos and in the majority of the dives it was moved constantly forward and backward by extending and contracting the neck while travelling on the seafloor. Animals detected prey at very short distances, performed quick capture attempts and spent the majority of their time on the seafloor searching for prey. Cormorants foraged at three different sea bottom habitats and the way in which they searched for food differed between habitats. Dives were frequently performed under low luminosity levels suggesting that cormorants would locate prey with other sensory systems in addition to sight. Our video data support the idea that Imperial cormorants’ efficient hunting involves the use of specialized foraging techniques to compensate for their poor underwater vision. PMID:26367384
Vanduyfhuys, Louis; Vandenbrande, Steven; Verstraelen, Toon; Schmid, Rochus; Waroquier, Michel; Van Speybroeck, Veronique
2015-05-15
QuickFF is a software package to derive accurate force fields for isolated and complex molecular systems in a quick and easy manner. Apart from its general applicability, the program has been designed to generate force fields for metal-organic frameworks in an automated fashion. The force field parameters for the covalent interactions are derived from ab initio data. The mathematical expression of the covalent energy is kept simple to ensure robustness and to avoid fitting deficiencies as much as possible. The user needs to produce an equilibrium structure and a Hessian matrix for one or more building units. Afterward, a force field is generated for the system using a three-step method implemented in QuickFF. The first two steps of the methodology are designed to minimize correlations among the force field parameters. In the last step, the parameters are refined by requiring the force field to reproduce the ab initio Hessian matrix in Cartesian coordinate space as accurately as possible. The method is applied to a set of 1000 organic molecules to demonstrate the ease of the software protocol. To illustrate its application to metal-organic frameworks (MOFs), QuickFF is used to determine force fields for MIL-53(Al) and MOF-5. For both materials, accurate force fields have already been reported in the literature, but they required a great deal of manual intervention. QuickFF is a tool that can easily be used by anyone with a basic knowledge of performing ab initio calculations. As a result, accurate force fields are generated with minimal effort. © 2015 Wiley Periodicals, Inc.
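As a toy illustration of the final refinement step (fitting force-field parameters so that the model Hessian reproduces an ab initio Hessian), the sketch below fits a single harmonic bond force constant by least squares. It is not the QuickFF code; the model and the target Hessian are invented for the example.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy 1D diatomic: Cartesian Hessian of a harmonic bond with force constant k.
    def model_hessian(k):
        return k * np.array([[1.0, -1.0],
                             [-1.0, 1.0]])

    # Pretend this came from an ab initio calculation (hypothetical numbers).
    H_ai = np.array([[480.0, -478.0],
                     [-478.0, 481.0]])

    # Least-squares objective: squared Frobenius norm of the Hessian mismatch.
    def objective(k):
        return np.sum((model_hessian(k) - H_ai) ** 2)

    res = minimize_scalar(objective, bounds=(0.0, 2000.0), method="bounded")
    print(f"fitted force constant k = {res.x:.1f} (toy units)")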
Hunter, Louise
2012-01-01
To review the advantages and disadvantages of e-questionnaires, and question whether or not reported disadvantages remain valid or can be limited or circumvented. The internet is likely to become the dominant medium for survey distribution, yet nurses and midwives have been slow to use online technology for research involving questionnaires. Relatively little is known about optimal methods of harnessing the internet's potential in health studies. A small e-questionnaire of health workers. The Medline and Maternity and Infant Care databases were searched for articles containing the words 'web', 'online', or 'internet' and 'survey' or 'questionnaire'. The search was restricted to articles in English published since 2000. The reference lists of retrieved articles were also searched. Reported disadvantages of online data collection, such as sample bias, psychometric distortions, 'technophobia' and lower response rates are discussed and challenged. The author reports her experience of conducting a survey with an e-questionnaire to contribute to the limited body of knowledge in this area, and suggests how to maximise the quantity and quality of responses to e-questionnaires. E-questionnaires offer the researcher an inexpensive, quick and convenient way to collect data. Many of the reported disadvantages of the medium are no longer valid. The science of conducting the perfect e-survey is emerging. However, the lessons learned in the author's study, together with other research, seem to suggest that satisfactory response rates and data quality can be achieved in a relatively short time if certain tactics are used. To get the best results from e-questionnaires, it is suggested that the questionnaire recipients should be targeted carefully and that the value of their potential contribution to the project should be emphasised. E-questionnaires should be convenient, quick and easy to access, and be set out in a way that encourages full and complete responses.
NASA Astrophysics Data System (ADS)
Yang, Hao; Chen, Lei; Lei, Chong; Zhang, Ju; Li, Ding; Zhou, Zhi-Min; Bao, Chen-Chen; Hu, Heng-Yao; Chen, Xiang; Cui, Feng; Zhang, Shuang-Xi; Zhou, Yong; Cui, Da-Xiang
2010-07-01
Quick and parallel genotyping of human papilloma virus (HPV) type 16/18 is carried out by a specially designed giant magnetoimpedance (GMI) based microchannel system. Micropatterned soft magnetic ribbon exhibiting large GMI ratio serves as the biosensor element. HPV genotyping can be determined by the changes in GMI ratio in corresponding detection region after hybridization. The result shows that this system has great potential in future clinical diagnostics and can be easily extended to other biomedical applications based on molecular recognition.
HTP-NLP: A New NLP System for High Throughput Phenotyping.
Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L
2017-01-01
Secondary use of clinical data for research requires a method to process the data quickly so that researchers can rapidly extract cohorts. We present two advances in the High Throughput Phenotyping NLP system which support the aim of truly high-throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. Semantic indexing to store and generalize partially processed results and the use of compositional expressions for ungrammatical text are discussed, along with a set of initial timing results for the system.
Shi, Lei; Wan, Youchuan; Gao, Xianjun
2018-01-01
In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
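A minimal sketch of the combined GA/TS idea follows, assuming a prematurity index defined as the ratio of mean to best population fitness and a synthetic fitness function in place of a real classifier; the operators, thresholds and feature counts are all illustrative assumptions rather than the authors' GATS implementation.

    import random

    random.seed(0)
    N_FEATURES = 20
    INFORMATIVE = set(range(5))        # hypothetical "useful" features
    POP_SIZE, GENERATIONS = 30, 40

    def fitness(mask):
        """Synthetic stand-in for classification accuracy: reward informative
        features, penalize subset size (real use would plug in a classifier)."""
        hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
        return hits - 0.05 * sum(mask)

    def mutate(mask, rate):
        return [1 - b if random.random() < rate else b for b in mask]

    def crossover(a, b):
        cut = random.randrange(1, N_FEATURES)
        return a[:cut] + b[cut:]

    def tabu_search(mask, steps=10, tabu_len=5):
        """Single-bit-flip local search with a short tabu list."""
        best, best_fit, tabu = mask[:], fitness(mask), []
        current = mask[:]
        for _ in range(steps):
            moves = []
            for i in range(N_FEATURES):
                if i in tabu:
                    continue
                neighbour = current[:]
                neighbour[i] = 1 - neighbour[i]
                moves.append((fitness(neighbour), i, neighbour))
            f, i, current = max(moves)
            tabu = (tabu + [i])[-tabu_len:]
            if f > best_fit:
                best, best_fit = current[:], f
        return best

    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        scored = sorted(pop, key=fitness, reverse=True)
        fits = [fitness(m) for m in scored]
        prematurity = (sum(fits) / len(fits)) / (fits[0] + 1e-9)   # assumed index
        if prematurity > 0.95:                 # population has nearly converged
            elite = [tabu_search(m) for m in scored[:5]]           # TS on the fittest
            rest = [mutate(m, rate=0.2) for m in scored[5:]]       # heavier mutation
            pop = elite + rest
        else:
            elite = scored[:2]
            children = [mutate(crossover(*random.sample(scored[:10], 2)), rate=0.02)
                        for _ in range(POP_SIZE - 2)]
            pop = elite + children

    best = max(pop, key=fitness)
    print("selected features:", [i for i, b in enumerate(best) if b])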
Fast grasping of unknown objects using cylinder searching on a single point cloud
NASA Astrophysics Data System (ADS)
Lei, Qujiang; Wisse, Martijn
2017-03-01
Grasping of unknown objects, with neither appearance data nor object models given in advance, is very important for robots that work in an unfamiliar environment. The goal of this paper is to quickly synthesize an executable grasp for one unknown object by using cylinder searching on a single point cloud. Specifically, a 3D camera is first used to obtain a partial point cloud of the target unknown object. An original method is then employed to post-process the partial point cloud to minimize the uncertainty which may lead to grasp failure. In order to accelerate the grasp searching, the surface normals of the target object are then used to constrain the synthesis of the cylinder grasp candidates. Operability analysis is then used to select all executable grasp candidates, followed by force balance optimization to choose the most reliable grasp as the final grasp execution. In order to verify the effectiveness of our algorithm, simulations on a Universal Robots UR5 arm and an under-actuated Lacquey Fetch gripper are used to examine the performance of this algorithm, and successful results are obtained.
NCBO Resource Index: Ontology-Based Search and Mining of Biomedical Resources
Jonquet, Clement; LePendu, Paea; Falconer, Sean; Coulet, Adrien; Noy, Natalya F.; Musen, Mark A.; Shah, Nigam H.
2011-01-01
The volume of publicly available data in biomedicine is constantly increasing. However, these data are stored in different formats and on different platforms. Integrating these data will enable us to facilitate the pace of medical discoveries by providing scientists with a unified view of this diverse information. Under the auspices of the National Center for Biomedical Ontology (NCBO), we have developed the Resource Index—a growing, large-scale ontology-based index of more than twenty heterogeneous biomedical resources. The resources come from a variety of repositories maintained by organizations from around the world. We use a set of over 200 publicly available ontologies contributed by researchers in various domains to annotate the elements in these resources. We use the semantics that the ontologies encode, such as different properties of classes, the class hierarchies, and the mappings between ontologies, in order to improve the search experience for the Resource Index user. Our user interface enables scientists to search the multiple resources quickly and efficiently using domain terms, without even being aware that there is semantics “under the hood.” PMID:21918645
Graph pyramids as models of human problem solving
NASA Astrophysics Data System (ADS)
Pizlo, Zygmunt; Li, Zheng
2004-05-01
Prior theories have assumed that human problem solving involves estimating distances among states and performing search through the problem space. The role of mental representation in those theories was minimal. Results of our recent experiments suggest that humans are able to solve some difficult problems quickly and accurately. Specifically, in solving these problems humans do not seem to rely on distances or on search. It is quite clear that producing good solutions without performing search requires a very effective mental representation. In this paper we concentrate on studying the nature of this representation. Our theory takes the form of a graph pyramid. To verify the psychological plausibility of this theory we tested subjects in a Euclidean Traveling Salesman Problem in the presence of obstacles. The role of the number and size of obstacles was tested for problems with 6-50 cities. We analyzed the effect of experimental conditions on solution time per city and on solution error. The main result is that time per city is systematically affected only by the size of obstacles, but not by their number, or by the number of cities.
Fast-kick-off monotonically convergent algorithm for searching optimal control fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel
2011-09-15
This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in the state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.
NCBO Resource Index: Ontology-Based Search and Mining of Biomedical Resources.
Jonquet, Clement; Lependu, Paea; Falconer, Sean; Coulet, Adrien; Noy, Natalya F; Musen, Mark A; Shah, Nigam H
2011-09-01
The volume of publicly available data in biomedicine is constantly increasing. However, these data are stored in different formats and on different platforms. Integrating these data will enable us to facilitate the pace of medical discoveries by providing scientists with a unified view of this diverse information. Under the auspices of the National Center for Biomedical Ontology (NCBO), we have developed the Resource Index-a growing, large-scale ontology-based index of more than twenty heterogeneous biomedical resources. The resources come from a variety of repositories maintained by organizations from around the world. We use a set of over 200 publicly available ontologies contributed by researchers in various domains to annotate the elements in these resources. We use the semantics that the ontologies encode, such as different properties of classes, the class hierarchies, and the mappings between ontologies, in order to improve the search experience for the Resource Index user. Our user interface enables scientists to search the multiple resources quickly and efficiently using domain terms, without even being aware that there is semantics "under the hood."
Efficient embedding of complex networks to hyperbolic space via their Laplacian
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-01-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the afore-mentioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction. PMID:27445157
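A minimal sketch of the Laplacian-based strategy follows, assuming the common recipe of taking angular coordinates from the first two non-trivial eigenvectors of the graph Laplacian and assigning radial coordinates by degree rank; the log-rank radial rule and the toy network are assumptions for illustration, not the authors' exact method.

    import numpy as np

    # Toy undirected network as an adjacency matrix (hypothetical).
    A = np.array([
        [0, 1, 1, 1, 1, 0],
        [1, 0, 1, 0, 0, 0],
        [1, 1, 0, 0, 0, 0],
        [1, 0, 0, 0, 1, 1],
        [1, 0, 0, 1, 0, 1],
        [0, 0, 0, 1, 1, 0],
    ], dtype=float)

    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A                     # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)         # eigenpairs sorted by eigenvalue

    # Angular coordinate from the first two non-trivial eigenvectors.
    theta = np.mod(np.arctan2(eigvecs[:, 2], eigvecs[:, 1]), 2 * np.pi)

    # Radial coordinate: higher-degree ("more popular") nodes sit closer to the
    # origin; this simple log-rank rule is an assumption for illustration.
    order = np.argsort(-degrees)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(degrees) + 1)
    r = 2.0 * np.log(ranks)

    for i, (ri, ti) in enumerate(zip(r, theta)):
        print(f"node {i}: r = {ri:.2f}, theta = {ti:.2f} rad")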
Automatically assisting human memory: a SenseCam browser.
Doherty, Aiden R; Moulin, Chris J A; Smeaton, Alan F
2011-10-01
SenseCams have many potential applications as tools for lifelogging, including the possibility of use as a memory rehabilitation tool. Given that a SenseCam can log hundreds of thousands of images per year, it is critical that these be presented to the viewer in a manner that supports the aims of memory rehabilitation. In this article we report a software browser constructed with the aim of using the characteristics of memory to organise SenseCam images into a form that makes the wealth of information stored on SenseCam more accessible. To enable a large amount of visual information to be easily and quickly assimilated by a user, we apply a series of automatic content analysis techniques to structure the images into "events", suggest their relative importance, and select representative images for each. This minimises effort when browsing and searching. We provide anecdotes on use of such a system and emphasise the need for SenseCam images to be meaningfully sorted using such a browser.
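The article does not spell out its content analysis pipeline here, so the following is only a generic sketch of the idea of grouping a lifelog image stream into events: compute a simple feature per image (here a histogram stands in for real image features), split the stream where consecutive images differ strongly, and pick the image closest to each event's centroid as its keyframe. The feature, distance measure and threshold are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in "images": random 8-bin grey-level histograms for a short stream.
    histograms = rng.dirichlet(np.ones(8), size=40)

    def l1(a, b):
        return float(np.abs(a - b).sum())

    # Event boundary wherever consecutive histograms differ more than a threshold.
    THRESHOLD = 0.9
    boundaries = [0] + [i for i in range(1, len(histograms))
                        if l1(histograms[i], histograms[i - 1]) > THRESHOLD]
    boundaries.append(len(histograms))

    events = [(boundaries[k], boundaries[k + 1]) for k in range(len(boundaries) - 1)]
    for start, end in events:
        segment = histograms[start:end]
        centroid = segment.mean(axis=0)
        key = start + int(np.argmin([l1(h, centroid) for h in segment]))
        print(f"event {start}-{end - 1}: keyframe image {key}")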
Efficient embedding of complex networks to hyperbolic space via their Laplacian
NASA Astrophysics Data System (ADS)
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-07-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the afore-mentioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction.
BiobankUniverse: automatic matchmaking between datasets for biobank data discovery and integration
Pang, Chao; Kelpin, Fleur; van Enckevort, David; Eklund, Niina; Silander, Kaisa; Hendriksen, Dennis; de Haan, Mark; Jetten, Jonathan; de Boer, Tommy; Charbon, Bart; Holub, Petr; Hillege, Hans; Swertz, Morris A
2017-01-01
Motivation: Biobanks are indispensable for large-scale genetic/epidemiological studies, yet it remains difficult for researchers to determine which biobanks contain data matching their research questions. Results: To overcome this, we developed a new matching algorithm that identifies pairs of related data elements between biobanks and research variables with high precision and recall. It integrates lexical comparison, Unified Medical Language System ontology tagging and semantic query expansion. The result is BiobankUniverse, a fast matchmaking service for biobanks and researchers. Biobankers upload their data elements and researchers their desired study variables, and BiobankUniverse automatically shortlists matching attributes between them. Users can quickly explore matching potential and search for biobanks/data elements matching their research. They can also curate matches and define personalized data-universes. Availability and implementation: BiobankUniverse is available at http://biobankuniverse.com or can be downloaded as part of the open source MOLGENIS suite at http://github.com/molgenis/molgenis. Contact: m.a.swertz@rug.nl. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29036577
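As a rough sketch of the lexical-plus-expansion part of such matchmaking (the UMLS ontology tagging itself is not reproduced), the snippet below expands tokens with a small synonym table and scores biobank data elements against a research variable by Jaccard overlap; the synonym table, example labels and threshold are invented for illustration.

    # Hypothetical synonym table standing in for ontology-based query expansion.
    SYNONYMS = {
        "bp": {"blood", "pressure"},
        "hypertension": {"blood", "pressure", "high"},
        "smoking": {"tobacco", "cigarette"},
    }

    def expand(text):
        tokens = set(text.lower().split())
        for tok in list(tokens):
            tokens |= SYNONYMS.get(tok, set())
        return tokens

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    biobank_elements = [
        "systolic blood pressure measurement",
        "current smoking status",
        "body mass index",
    ]
    research_variable = "hypertension history"

    query = expand(research_variable)
    scored = sorted(((jaccard(expand(e), query), e) for e in biobank_elements),
                    reverse=True)
    for score, element in scored:
        if score > 0.1:                      # shortlist threshold (assumed)
            print(f"{score:.2f}  {element}")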
LCROSS - Lunar Impactor: Pioneering Risk-Tolerant Exploration in Search for Water on the Moon
NASA Technical Reports Server (NTRS)
Andrews, Daniel R.
2010-01-01
The Lunar CRater Observation and Sensing Satellite (LCROSS) was launched with the Lunar Reconnaissance Orbiter (LRO) on June 18, 2009 to determine the presence of water-ice in a permanently shadowed crater on the south pole of the Moon. However, an equally important purpose was to pioneer low-cost, quick-turnaround NASA missions that could accept a higher-than-normal level of technical risk. When the LCROSS mission proposal was competitively selected by the NASA Exploration Systems Mission Directorate to design, build, and launch a spacecraft in 31 months with a $79M cost-capped budget and a fixed mass allocation, NASA Ames Research Center and its industry partner, Northrop-Grumman, needed a game-changing approach to be successful. That approach was a ground-breaking combination of having a risk-tolerant NASA Class D mission status and finding the right balance point between the inflexible elements of cost and schedule and the newly flexible element of technical capability.
Organic Entrainment and Preservation in Volcanic Glasses
NASA Technical Reports Server (NTRS)
Wilhelm, Mary Beth; Ojha, Lujendra; Brunner, Anna E.; Dufek, Josef D.; Wray, James Joseph
2014-01-01
Unaltered pyroclastic deposits have previously been deemed to have "low" potential for the formation, concentration and preservation of organic material on the Martian surface. Yet volcanic glasses that have solidified very quickly after an eruption may be good candidates for containment and preservation of refractory organic material that existed in a biologic system pre-eruption due to their impermeability and ability to attenuate UV radiation. Analysis using NanoSIMS of volcanic glass could then be performed to both deduce carbon isotope ratios that indicate biologic origin and confirm entrainment during eruption. Terrestrial contamination is one of the biggest barriers to definitive Martian organic identification in soil and rock samples. While there is a greater potential to concentrate organics in sedimentary strata, volcanic glasses may better encapsulate and preserve organics over long time scales, and are widespread on Mars. If volcanic glass from many sites on Earth could be shown to contain biologically derived organics from the original environment, there could be significant implications for the search for biomarkers in ancient Martian environments.
Automatic Co-Registration of QuickBird Data for Change Detection Applications
NASA Technical Reports Server (NTRS)
Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.
2006-01-01
This viewgraph presentation reviews the use of the Automatic Fusion of Image Data System (AFIDS) for automatic co-registration of QuickBird data to ascertain whether changes have occurred in images. The process is outlined, and views from Iraq and Los Angeles are shown to illustrate the process.
Scala, Raffaele; Maccari, Uberto; Madioni, Chiara; Venezia, Duccio; La Magra, Lidia Calogera
2015-01-01
Amyloidosis may involve the respiratory system with different clinical-radiological-functional patterns which are not always easy to recognize. A good level of knowledge of the disease, active integration of the pulmonologist within a multidisciplinary setting and a high level of clinical suspicion are necessary for an early diagnosis of respiratory amyloidosis. The aim of this retrospective study was to evaluate the number and the patterns of amyloidosis involving the respiratory system. We searched for cases of amyloidosis among patients attending the multidisciplinary rare and diffuse lung disease outpatients' clinic of the Pulmonology Unit of the Hospital of Arezzo from 2007 to 2012. Among the 298 patients evaluated during the study period, we identified three cases of amyloidosis with involvement of the respiratory system, associated or not with other extra-thoracic localizations, whose diagnosis was histo-pathologically confirmed after evaluation by the pulmonologist, the radiologist, and the pathologist. Our experience as a multidisciplinary team confirms that intra-thoracic amyloidosis is an uncommon disorder, representing 1.0% of the cases of rare and diffuse lung diseases referred to our center. The diagnosis of the disease is not always easy and quick, as amyloidosis may involve different parts of the respiratory system (airways, pleura, parenchyma). It is therefore recommended to keep this orphan disease in mind in the differential diagnosis of the wide range of clinical scenarios the pulmonologist may encounter in clinical practice. PMID:26229565
NASA Technical Reports Server (NTRS)
Kocurek, Michael J.
2005-01-01
The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.
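The harvistSVM cascade described above, running a quick SVM everywhere and escalating only low-confidence pixels to a slower, more accurate SVM, can be sketched in a few lines. The code below is not the HARVIST implementation; it is a minimal illustration using scikit-learn, with a synthetic dataset standing in for per-pixel crop features and an arbitrary confidence threshold.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

# Toy stand-in for per-pixel crop features; the real HARVIST inputs are image-derived.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

fast_svm = LinearSVC(max_iter=5000).fit(X_train, y_train)   # quick but less accurate stage
slow_svm = SVC(kernel="rbf").fit(X_train, y_train)          # slower, more accurate stage

# First pass: classify every pixel with the fast model and record its margin.
margins = fast_svm.decision_function(X_test)
labels = fast_svm.predict(X_test)

# Second pass: re-classify only the low-confidence pixels with the slow model.
low_conf = np.abs(margins) < 0.5          # arbitrary confidence threshold for this sketch
if low_conf.any():
    labels[low_conf] = slow_svm.predict(X_test[low_conf])

print(f"{low_conf.mean():.1%} of test pixels were escalated to the slow SVM")
print(f"cascade accuracy: {(labels == y_test).mean():.3f}")
```

The speedup comes from running the expensive model only on the small fraction of pixels the cheap model is unsure about; where the threshold sits controls the accuracy/speed trade-off.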
Self-navigation of a scanning tunneling microscope tip toward a micron-sized graphene sample.
Li, Guohong; Luican, Adina; Andrei, Eva Y
2011-07-01
We demonstrate a simple capacitance-based method to quickly and efficiently locate micron-sized conductive samples, such as graphene flakes, on insulating substrates in a scanning tunneling microscope (STM). By using edge recognition, the method is designed to locate and identify small features while the STM tip is far above the surface, allowing for crash-free search and navigation. The method can be implemented in any STM environment, even at low temperatures and in strong magnetic fields, with minimal or no hardware modifications.
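As a rough illustration of the edge-recognition idea (not the authors' implementation), the sketch below takes a coarse grid of tip-sample capacitance readings and flags points where the capacitance changes sharply, which is where the edges of a conductive flake would show up; the toy map, noise level, and threshold are all assumptions made for the example.

```python
import numpy as np

def find_sample_edges(capacitance_map, threshold=None):
    """Flag grid points where the coarse capacitance image changes sharply.

    capacitance_map : 2-D array of tip-sample capacitance readings taken on a grid
                      while the tip is far above the surface.
    Returns a boolean mask of likely flake-edge locations.
    """
    gy, gx = np.gradient(capacitance_map)
    grad_mag = np.hypot(gx, gy)
    if threshold is None:
        threshold = grad_mag.mean() + 2 * grad_mag.std()   # heuristic cutoff for the sketch
    return grad_mag > threshold

# Toy map: a conductive flake raises the capacitance over a small square region.
cap = np.zeros((50, 50))
cap[20:30, 25:35] = 1.0
cap += 0.05 * np.random.default_rng(1).normal(size=cap.shape)

edges = find_sample_edges(cap)
ys, xs = np.nonzero(edges)
print(f"edge pixels found, centred near grid point ({ys.mean():.1f}, {xs.mean():.1f})")
```

In practice the tip would then be stepped toward the detected edge cluster before any tunneling approach is attempted.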
Fingerprint-Based Structure Retrieval Using Electron Density
Yin, Shuangye; Dokholyan, Nikolay V.
2010-01-01
We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as one determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed using 3D Zernike moments to describe the electron density, and reduce the problem of fitting the structure to the electron density to a simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy. PMID:21287628
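Once fingerprints are in hand, the retrieval step reduces to comparing fixed-length vectors. The sketch below illustrates that comparison with random vectors standing in for precomputed 3D Zernike moment fingerprints; computing the actual moments from a density map is beyond this toy example, and the entry names and dimensionality are made up.

```python
import numpy as np

def rank_by_fingerprint(query_fp, database_fps, top_k=10):
    """Rank database entries by Euclidean distance between fingerprint vectors.

    query_fp     : 1-D array, fingerprint of the experimental density map
    database_fps : dict mapping entry IDs to fingerprint vectors of equal length
    """
    ids = list(database_fps)
    fps = np.stack([database_fps[i] for i in ids])
    dists = np.linalg.norm(fps - query_fp, axis=1)
    order = np.argsort(dists)[:top_k]
    return [(ids[i], float(dists[i])) for i in order]

# Toy example with random 121-dimensional "fingerprints" (the real vectors would be
# rotation-invariant 3D Zernike moment magnitudes computed from the density maps).
rng = np.random.default_rng(0)
db = {f"entry_{n:04d}": rng.normal(size=121) for n in range(1000)}
query = db["entry_0042"] + 0.01 * rng.normal(size=121)   # a slightly perturbed copy
print(rank_by_fingerprint(query, db, top_k=3))
```

Because each comparison is a cheap vector distance, the whole database can be screened far faster than by fitting every structure into the density directly.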
NASA Astrophysics Data System (ADS)
Fan, Hong; Zhu, Anfeng; Zhang, Weixia
2015-12-01
In order to meet the need for rapid positioning of 12315 complaints, and given that telephone complaints are expressed in natural language, a semantic retrieval framework is proposed that is based on natural language parsing and geographical-name ontology reasoning. Within this framework, a search-result ranking and recommendation algorithm is proposed that considers both geo-name conceptual similarity and spatial geometry relation similarity. The experiments show that this method can help operators quickly locate 12315 complaints, increasing customer satisfaction with the industry and commerce administration.
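The ranking idea, combining geo-name conceptual similarity with spatial-geometry-relation similarity, can be sketched as a weighted score. The weights, similarity values, and candidate names below are illustrative assumptions; the abstract does not give the exact weighting used by the authors.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    concept_sim: float   # similarity between the complaint's place terms and the
                         # gazetteer/ontology concept, in [0, 1]
    spatial_sim: float   # similarity of the spatial relation (e.g., "near", "inside"), in [0, 1]

def rank_candidates(candidates, w_concept=0.6, w_spatial=0.4):
    """Rank candidate locations by a weighted sum of the two similarities.

    The weights here are arbitrary placeholders, not values from the paper.
    """
    scored = [(w_concept * c.concept_sim + w_spatial * c.spatial_sim, c) for c in candidates]
    return [c for score, c in sorted(scored, key=lambda t: t[0], reverse=True)]

results = rank_candidates([
    Candidate("Shop A (hypothetical)", concept_sim=0.9, spatial_sim=0.4),
    Candidate("Shop B (hypothetical)", concept_sim=0.7, spatial_sim=0.8),
])
print([c.name for c in results])
```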
Site characterization and analysis penetrometer system
NASA Astrophysics Data System (ADS)
Heath, Jeff
1995-04-01
The site characterization and analysis penetrometer system (SCAPS) with laser induced fluorescence (LIF) sensors is being demonstrated as a quick field screening technique to determine the physical and chemical characteristics of subsurface soil and contaminants at hazardous waste sites. SCAPS is a collaborative development effort of the Navy, Army, and Air Force under the Tri-Service SCAPS Program. The current SCAPS configuration is designed to quickly and cost-effectively distinguish areas contaminated with petroleum products (hydrocarbons) from unaffected areas.
Control Scheme for Quickly Starting X-ray Tube
NASA Astrophysics Data System (ADS)
Nakahama, Masayuki; Nakanishi, Toshiki; Ishitobi, Manabu; Ito, Tuyoshi; Hosoda, Kenichi
A control scheme for quickly starting a portable X-ray generator used in the livestock industry is proposed in this paper. A portable X-ray generator used to take X-ray images of animals such as horses, sheep and dogs should be capable of starting quickly because it is difficult for veterinarians to take X-ray images of animals at the desired moment. In order to develop a scheme for starting the X-ray tube quickly, it is necessary to analyze the X-ray tube; however, such an analysis has not been discussed until now. First, the states of an X-ray tube are classified into the temperature-limited state and the space-charge-limited state. Furthermore, the existence of a “mixed state” that comprises both is newly proposed in this paper. From these analyses, a novel scheme for quickly starting an X-ray generator is proposed; this scheme takes into account the characteristics of the X-ray tube. The proposed X-ray system, which is capable of starting quickly, is evaluated on the basis of experimental results.
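For background on the two regimes named above, the sketch below evaluates the standard textbook emission laws, Richardson-Dushman for the temperature-limited state and Child-Langmuir for the space-charge-limited state, and reports which one limits the current. The planar-diode geometry and the filament temperature, work function, and gap values are illustrative assumptions, and the paper's newly proposed “mixed state” analysis is not reproduced here.

```python
import math

# Physical constants
EPS0 = 8.854e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602e-19    # electron charge, C
M_E = 9.109e-31         # electron mass, kg
K_B_EV = 8.617e-5       # Boltzmann constant, eV/K
A_RICHARDSON = 1.2e6    # Richardson constant, A m^-2 K^-2 (approximate theoretical value)

def thermionic_current_density(T_kelvin, work_function_eV=4.5):
    """Richardson-Dushman emission (temperature-limited regime); 4.5 eV ~ tungsten."""
    return A_RICHARDSON * T_kelvin**2 * math.exp(-work_function_eV / (K_B_EV * T_kelvin))

def child_langmuir_current_density(V_anode, gap_m):
    """Space-charge-limited current density for an idealized planar diode."""
    return (4 * EPS0 / 9) * math.sqrt(2 * E_CHARGE / M_E) * V_anode**1.5 / gap_m**2

def limiting_regime(T_kelvin, V_anode, gap_m):
    j_temp = thermionic_current_density(T_kelvin)
    j_scl = child_langmuir_current_density(V_anode, gap_m)
    regime = "temperature-limited" if j_temp < j_scl else "space-charge-limited"
    return regime, min(j_temp, j_scl)

# Illustrative values only: 2400 K filament, 60 kV anode voltage, 1 cm gap.
print(limiting_regime(2400.0, 60e3, 0.01))
```

During a quick start the filament is still heating, so the tube typically passes through the temperature-limited regime before the space-charge limit takes over, which is why the start-up behaviour depends on which expression is smaller at each instant.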
Hayward, Douglas G; Wong, Jon W; Zhang, Kai; Chang, James; Shi, Feng; Banerjee, Kaushik; Yang, Paul
2011-01-01
Five different mass spectrometers interfaced to GC or LC were evaluated for their application to targeted and nontargeted screening of pesticides in two foods, spinach and ginseng. The five MS systems were capillary GC/MS/MS, GC-high resolution time-of-flight (GC/HR-TOF)-MS, TOF-MS interfaced with a comprehensive multidimensional GC (GCxGC/TOF-MS), an MS/MS ion trap hybrid mass spectrometer (qTrap) interfaced with an ultra-performance liquid chromatograph (UPLC-qTrap), and UPLC interfaced to an orbital trap high resolution mass spectrometer (UPLC/Orbitrap HR-MS). Each MS system was tested with spinach and ginseng extracts prepared through a modified quick, easy, cheap, effective, rugged, and safe (QuEChERS) procedure. Each matrix was fortified at 10 and 50 ng/g for spinach or 25 and 100 ng/g for ginseng with subsets of 486 pesticides, isomers, and metabolites representing most pesticide classes. HR-TOF-MS was effective in a targeted search for characteristic accurate mass ions and identified 97% of 170 pesticides in ginseng at 25 ng/g. A targeted screen of either ginseng or spinach found 94-95% of pesticides fortified for analysis at 10 ng/g with GC/MS/MS or LC/MS/MS using multiple reaction monitoring (MRM) procedures. Orbitrap-MS successfully found 89% of 177 fortified pesticides in spinach at 25 ng/g using a targeted search of accurate mass pseudomolecular ions in the positive electrospray ionization mode. A comprehensive GCxGC/TOF-MS system provided separation and identification of 342 pesticides and metabolites in a single 32 min acquisition with standards. Only 67% and 81% of the pesticides were identified in the ginseng (25 ng/g) and spinach (10 ng/g) matrices, respectively. MS/MS or qTrap-MS operated in the MRM mode produced the lowest false-negative rates at 10 ng/g. Improvements to instrumentation, methods, and software are needed for efficient use of nontargeted screens in parallel with triple quadrupole MS.
Exposing the Strategies that can Reduce the Obstacles: Improving the Science User Experience
NASA Astrophysics Data System (ADS)
Lindsay, F. E.; Brennan, J.; Behnke, J.; Lynnes, C.
2017-12-01
It is now well established that pursuing generic solutions to what seem to be common problems in Earth science data access and use can often lead to disappointing results for both system developers and the intended users. This presentation focuses on real-world experience of managing a large and complex data system, NASA's Earth Observing System Data and Information System (EOSDIS), whose mission is to serve both broad user communities and those with smaller niche applications of Earth science data and services. In the talk, we focus on our experiences with known data-user obstacles, characterizing EOSDIS approaches, including various technological techniques, for engaging users and bolstering, where possible, their experiences with EOSDIS. To improve how existing and prospective users discover and access NASA data from EOSDIS, we introduce our cross-archive tool, Earthdata Search. This new search and order tool further empowers users to quickly access data sets using clever and intuitive features. The Worldview data visualization tool is also discussed, highlighting how many users now perform extensive data exploration without necessarily downloading data. We also explore our EOSDIS data discovery and access webinars, data recipes and short tutorials, targeted technical and data publications, user profiles, and social media as additional tools and methods for improving our outreach and communication with a diverse user community. These efforts have paid substantial dividends for our user communities by allowing us to target discipline-specific community needs. The desired take-away from this presentation is an improved understanding of how EOSDIS has approached, and in several instances achieved, removing or lowering the barriers to data access and use. As we look ahead to more complex Earth science missions, EOSDIS will continue to focus on our user communities, both broad and specialized, so that our overall data system can continue to serve the needs of science and applications users.
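Earthdata Search itself is a browser tool, but the metadata catalogue behind it, NASA's Common Metadata Repository (CMR), can also be queried programmatically. The sketch below shows one way to do so with the requests library; the endpoint, parameter names, and the example collection short name are assumptions to verify against the current CMR documentation rather than a definitive recipe.

```python
import requests

# Granule search against NASA's CMR (the catalogue that backs Earthdata Search).
# Endpoint and parameter names are taken to be those in the public CMR documentation;
# treat them as assumptions and check before relying on this sketch.
CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

params = {
    "short_name": "MOD09GA",                                  # example collection short name
    "temporal": "2017-07-01T00:00:00Z,2017-07-07T23:59:59Z",  # ISO 8601 start,end
    "bounding_box": "-125,35,-115,45",                        # W,S,E,N in degrees
    "page_size": 10,
}

response = requests.get(CMR_GRANULE_SEARCH, params=params, timeout=30)
response.raise_for_status()

# CMR's JSON response is an Atom-style feed; print the matching granule titles.
for entry in response.json()["feed"]["entry"]:
    print(entry["title"])
```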