Sample records for computer search service

  1. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service 1 2013-07-01 2013-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  2. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service 1 2012-07-01 2012-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  3. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  4. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  5. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  6. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    PubMed Central

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of customers' personal preferences based on semantic-web and rule-based services is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based as well as semantics-based search, supporting the combination of the electronic and physical spaces into one and thereby making possible the real-time search for web services and the construction of efficient web services.

  7. A survey of computer search service costs in the academic health sciences library.

    PubMed Central

    Shirley, S

    1978-01-01

    The Norris Medical Library, University of Southern California, has recently completed an extensive survey of costs involved in the provision of computer search services beyond vendor charges for connect time and printing. In this survey costs for such items as terminal depreciation, repair contract, personnel time, and supplies are analyzed. Implications of this cost survey are discussed in relation to planning and price setting for computer search services. PMID:708953

  8. Growth Dynamics of Information Search Services.

    ERIC Educational Resources Information Center

    Lindqvist, Mats

    Computer-based information search services (ISSs) of the type that provide online literature searches are analyzed from a systems viewpoint using a continuous simulation model. The analysis shows that the observed growth and stagnation of a typical ISS can be explained as a natural consequence of market responses to the service together with a…

  9. Privacy-Aware Relevant Data Access with Semantically Enriched Search Queries for Untrusted Cloud Storage Services.

    PubMed

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Lee, Sungyoung; Chung, Tae Choong

    2016-01-01

    Privacy-aware search of outsourced data ensures relevant data access in the untrusted domain of a public cloud service provider. A subscriber of a public cloud storage service can determine the presence or absence of a particular keyword by submitting a search query in the form of a trapdoor. However, these trapdoor-based search queries are limited in functionality and cannot be used to identify secure outsourced data which contains semantically equivalent information. In addition, trapdoor-based methodologies are confined to predefined trapdoors and prevent subscribers from searching outsourced data with arbitrarily defined search criteria. To solve the problem of relevant data access, we have proposed an index-based privacy-aware search methodology that ensures semantic retrieval of data from an untrusted domain. This method ensures oblivious execution of a search query and enables authorized subscribers to model conjunctive search queries without relying on predefined trapdoors. A security analysis of our proposed methodology shows that, in a conspired attack, unauthorized subscribers and untrusted cloud service providers cannot deduce any information that can lead to the potential loss of data privacy. A computational time analysis on commodity hardware demonstrates that our proposed methodology requires moderate computational resources to model a privacy-aware search query and for its oblivious evaluation on a cloud service provider.

  10. Privacy-Aware Relevant Data Access with Semantically Enriched Search Queries for Untrusted Cloud Storage Services

    PubMed Central

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Lee, Sungyoung; Chung, Tae Choong

    2016-01-01

    Privacy-aware search of outsourced data ensures relevant data access in the untrusted domain of a public cloud service provider. A subscriber of a public cloud storage service can determine the presence or absence of a particular keyword by submitting a search query in the form of a trapdoor. However, these trapdoor-based search queries are limited in functionality and cannot be used to identify secure outsourced data which contains semantically equivalent information. In addition, trapdoor-based methodologies are confined to predefined trapdoors and prevent subscribers from searching outsourced data with arbitrarily defined search criteria. To solve the problem of relevant data access, we have proposed an index-based privacy-aware search methodology that ensures semantic retrieval of data from an untrusted domain. This method ensures oblivious execution of a search query and enables authorized subscribers to model conjunctive search queries without relying on predefined trapdoors. A security analysis of our proposed methodology shows that, in a conspired attack, unauthorized subscribers and untrusted cloud service providers cannot deduce any information that can lead to the potential loss of data privacy. A computational time analysis on commodity hardware demonstrates that our proposed methodology requires moderate computational resources to model a privacy-aware search query and for its oblivious evaluation on a cloud service provider. PMID:27571421
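
    To make the trapdoor limitation discussed above concrete, here is a minimal sketch of exact-match trapdoor search. The names (make_trapdoor, index_document) and the pre-shared key are illustrative assumptions, not the paper's construction. The server matches deterministic HMAC tags without learning the keywords, but semantically equivalent terms produce unrelated tags:

      import hmac, hashlib

      KEY = b"owner-subscriber-shared-secret"  # assumption: pre-shared out of band

      def make_trapdoor(keyword: str) -> str:
          # Deterministic tag; the server compares tags without seeing keywords.
          return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

      def index_document(doc_id: str, keywords: list, index: dict) -> None:
          for kw in keywords:
              index.setdefault(make_trapdoor(kw), set()).add(doc_id)

      index = {}
      index_document("doc1", ["genome", "privacy"], index)
      print(index.get(make_trapdoor("privacy"), set()))       # {'doc1'}
      print(index.get(make_trapdoor("confidential"), set()))  # set(): similar meaning, no match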

  11. 49 CFR 360.1 - Fees for records search, review, copying, certification, and related services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Certificate of the Director, Office of Data Analysis and Information Systems, as to the authenticity of... request for ADP data. (2) The fee for computer searches will be set at the current rate for computer service. Information on those charges can be obtained from the Office of Data Analysis and Information...

  12. The Dynamics of Information Search Services.

    ERIC Educational Resources Information Center

    Lindquist, Mats G.

    Computer-based information search services (ISSs) of the type that provide online literature searches are analyzed from a systems viewpoint using a continuous simulation model. The methodology applied is "system dynamics," and the system language is DYNAMO. The analysis reveals that the observed growth and stagnation of a typical ISS can…

  13. A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service

    PubMed Central

    Yin, Fan; Tang, Xiaohu

    2017-01-01

    Location-based services (LBS), one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain fine-grained search results satisfying not only the given spatial range but also the searched content. A detailed privacy analysis shows that our proposed scheme indeed achieves privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, outperforming existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching. PMID:28696395

  14. A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service.

    PubMed

    Yang, Xue; Yin, Fan; Tang, Xiaohu

    2017-07-11

    Location-based services (LBS), one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain fine-grained search results satisfying not only the given spatial range but also the searched content. A detailed privacy analysis shows that our proposed scheme indeed achieves privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, outperforming existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching.
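
    As a toy illustration of the functionality FGPQ protects (the cryptography is omitted entirely), a fine-grained LBS query combines a spatial-range predicate with a content predicate; all data and names below are invented for this sketch:

      from math import hypot

      # Toy plaintext model of a fine-grained LBS query: spatial range plus content.
      # FGPQ performs this matching under encryption; this only shows the
      # functionality being protected, not the scheme itself.
      POIS = [
          {"name": "cafe_a", "x": 1.0, "y": 2.0, "tags": {"coffee", "wifi"}},
          {"name": "lot_b",  "x": 4.0, "y": 1.0, "tags": {"parking"}},
      ]

      def range_query(cx, cy, radius, required_tags):
          return [p["name"] for p in POIS
                  if hypot(p["x"] - cx, p["y"] - cy) <= radius
                  and required_tags <= p["tags"]]

      print(range_query(0.0, 0.0, 3.0, {"coffee"}))  # ['cafe_a']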

  15. Explorative search of distributed bio-data to answer complex biomedical questions

    PubMed Central

    2014-01-01

    Background The huge amount of biomedical-molecular data increasingly produced is providing scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract those data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment which eases search, exploration and ranking-aware combination of heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results. Thus, it provides an extremely useful automated support for exploratory integrated bio search, which is fundamental for Life Science data-driven knowledge discovery. PMID:24564278

  16. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  17. Growth Dynamics of Information Search Services

    ERIC Educational Resources Information Center

    Lindquist, Mats G.

    1978-01-01

    An analysis of computer-based information search services (ISSs) from a systems viewpoint, using a continuous simulation model to reveal the growth and stagnation of a typical system, is presented, as well as an analysis of decision making for an ISS. (Author/MBR)

  18. Secure Genomic Computation through Site-Wise Encryption

    PubMed Central

    Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu

    2015-01-01

    Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients’ genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds. PMID:26306278

  19. Secure Genomic Computation through Site-Wise Encryption.

    PubMed

    Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu

    2015-01-01

    Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients' genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds.
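
    A minimal sketch of the site-wise encryption idea described above: each genomic site is encrypted deterministically under its position, so a disease marker can be matched on ciphertexts by equality. The keyed-HMAC construction and all names here are illustrative assumptions, not the paper's exact scheme:

      import hmac, hashlib

      MASTER_KEY = b"example-master-key"  # assumption: held by the data owner

      def encrypt_site(position: int, base: str) -> str:
          # Deterministic per-site tag: equal bases at the same position collide,
          # so a marker can be matched on the cloud without decryption.
          msg = f"{position}:{base}".encode()
          return hmac.new(MASTER_KEY, msg, hashlib.sha256).hexdigest()[:16]

      genome = {101: "A", 102: "G", 103: "T"}
      encrypted = {pos: encrypt_site(pos, b) for pos, b in genome.items()}

      # Searching for a disease marker "G at site 102" on the encrypted data:
      marker_tag = encrypt_site(102, "G")
      print(encrypted[102] == marker_tag)  # True: found without revealing the genome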

  20. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by running over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  21. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by running over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  22. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain

    PubMed Central

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of the public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search-over-encrypted-data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes which extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations. PMID:28692697

  23. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain.

    PubMed

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem; Khan, Wajahat Ali

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of the public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search-over-encrypted-data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes which extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations.
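
    A small plaintext sketch of the typo-tolerant matching idea attributed to OS2 above: keywords are folded into a bloom filter over character bigrams, and similarity is scored on the filters rather than on the words themselves. The encryption and homomorphic machinery are omitted, and the parameters (m, k) are arbitrary assumptions:

      import hashlib

      def bigrams(word: str) -> set:
          w = f"#{word.lower()}#"
          return {w[i:i + 2] for i in range(len(w) - 1)}

      def bloom(items: set, m: int = 128, k: int = 3) -> int:
          bits = 0
          for item in items:
              for seed in range(k):
                  h = int(hashlib.sha256(f"{seed}:{item}".encode()).hexdigest(), 16)
                  bits |= 1 << (h % m)
          return bits

      def similarity(a: int, b: int) -> float:
          # Jaccard-style score over set bits; a typo perturbs few bigrams,
          # so close spellings keep a high score.
          inter = bin(a & b).count("1")
          union = bin(a | b).count("1")
          return inter / union if union else 0.0

      stored = bloom(bigrams("encryption"))
      print(similarity(stored, bloom(bigrams("encrypton"))))  # high despite the typo
      print(similarity(stored, bloom(bigrams("parking"))))    # low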

  24. Use of PL/1 in a Bibliographic Information Retrieval System.

    ERIC Educational Resources Information Center

    Schipma, Peter B.; And Others

    The Information Sciences section of IIT Research Institute (IITRI) has developed a Computer Search Center and is currently conducting a research project to explore computer searching of a variety of machine-readable data bases. The Center provides Selective Dissemination of Information services to academic, industrial and research organizations…

  25. 36 CFR 1250.56 - Fee schedule for NARA operational records.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., the rate is $33 per hour (or fraction thereof) (2) Computer searching. This is the actual cost to NARA of operating the computer and the salary of the operator. When the search is relatively... issues regarding the application of exemptions. (c) Reproduction fees—(1) Self-service photocopying. At...

  26. 36 CFR 1250.56 - Fee schedule for NARA operational records.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., the rate is $33 per hour (or fraction thereof) (2) Computer searching. This is the actual cost to NARA of operating the computer and the salary of the operator. When the search is relatively... issues regarding the application of exemptions. (c) Reproduction fees—(1) Self-service photocopying. At...

  27. 36 CFR 1250.56 - Fee schedule for NARA operational records.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., the rate is $33 per hour (or fraction thereof) (2) Computer searching. This is the actual cost to NARA of operating the computer and the salary of the operator. When the search is relatively... issues regarding the application of exemptions. (c) Reproduction fees—(1) Self-service photocopying. At...

  28. 19 CFR 103.10 - Fees for services.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... each hour or fraction thereof. If a computer search is required because of the nature of the records... based on computer time and supplies necessary to comply with the request. (4) Searches requiring travel...

  29. Online literature-retrieval systems: how to get started.

    PubMed

    Tousignaut, D R

    1983-02-01

    Basic information describing online literature-retrieval systems is presented; the power of online searching is also discussed. The equipment, expense involved, and training necessary to perform online searching efficiently are described. An individual searcher needs only a computer terminal and a telephone; by telephone, the searcher connects with an online vendor's computer at another location. The four major U.S. vendors (Dialog, Bibliographic Retrieval Services, Systems Development Corporation, and the National Library of Medicine) are compared. A step-by-step procedure of logging in and searching is presented. Using the International Pharmaceutical Abstracts database as an example, 17 access points for locating an article via an online system are compared with only two (the subject and author index entries) of a printed service. By searching online, one can search the published literature on a specific topic in a matter of minutes. An online search is very useful when limited information is available or when the search question contains a term that is not in a printed index.

  30. COMPENDEX/TEXT-PAC: RETROSPECTIVE SEARCH.

    ERIC Educational Resources Information Center

    Standera, Oldrich

    The Text-Pac System is capable of generating indexes and bulletins to provide a current information service without the selectivity feature. Indexes of the accumulated data base may also be used as a basis for manual retrospective searching. The manual search involves searching computer-prepared indexes from a machine readable data base produced…

  31. Online Secondary Research in the Advertising Research Class: A Friendly Introduction to Computing.

    ERIC Educational Resources Information Center

    Adler, Keith

    In an effort to promote computer literacy among advertising students, an assignment was devised that required the use of online database search techniques to find secondary research materials. The search program, chosen for economical reasons, was "Classroom Instruction Program" offered by Dialog Information Services. Available for a…

  32. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    PubMed

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  33. Library Searching: An Industrial User's Viewpoint.

    ERIC Educational Resources Information Center

    Hendrickson, W. A.

    1982-01-01

    Discusses library searching of the chemical literature from an industrial user's viewpoint, focusing on differences between academic and industrial researchers' techniques for searching the same problem area. Indicates that industry users need more exposure to patents, work with abstracting services, and continued improvement in computer searching…

  34. The ALL-OUT Library; A Design for Computer-Powered, Multidimensional Services.

    ERIC Educational Resources Information Center

    Sleeth, Jim; LaRue, James

    1983-01-01

    Preliminary description of design of electronic library and home information delivery system highlights potentials of personal computer interface program (applying for service, assuring that users are valid, checking for measures, searching, locating titles) and incorporation of concepts used in other information systems (security checks,…

  35. 49 CFR 7.44 - Services performed without charge or at a reduced charge.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...

  36. 49 CFR 7.44 - Services performed without charge or at a reduced charge.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...

  37. 49 CFR 7.44 - Services performed without charge or at a reduced charge.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...
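
    A hedged worked example of the free-allowance rule sketched in the truncated text above. It assumes (this is an assumption, since the rule text is cut off) that the two free hours are measured in dollar terms as two hours of a manual searcher's salary cost, and all rates are invented:

      # When does the free computer-search allowance run out?
      MANUAL_HOURLY = 21.00    # assumed salary cost of one hour of manual search
      CPU_HOURLY = 40.00       # assumed cost of operating the CPU per hour
      OPERATOR_HOURLY = 26.00  # assumed computer operator salary cost per hour

      free_allowance = 2 * MANUAL_HOURLY            # dollar value of two free hours
      computer_rate = CPU_HOURLY + OPERATOR_HOURLY  # dollar cost per computer hour
      print(f"free computer search time: {free_allowance / computer_rate:.2f} h")
      # -> about 0.64 h of computer search before charges begin, under these rates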

  38. Reference Revolutions.

    ERIC Educational Resources Information Center

    Mason, Marilyn Gell

    1998-01-01

    Describes developments in Online Computer Library Center (OCLC) electronic reference services. Presents a background on networked cataloging and the initial implementation of reference services by OCLC. Discusses the introduction of OCLC FirstSearch service, which today offers access to over 65 databases, future developments in integrated…

  39. A City Parking Integration System Combined with Cloud Computing Technologies and Smart Mobile Devices

    ERIC Educational Resources Information Center

    Yeh, Her-Tyan; Chen, Bing-Chang; Wang, Bo-Xun

    2016-01-01

    The current study applied cloud computing technology and smart mobile devices combined with a streaming server for parking lots to plan a city parking integration system. It is also equipped with a parking search system, parking navigation system, parking reservation service, and car retrieval service. With this system, users can quickly find…

  40. The Versatile Terminal.

    ERIC Educational Resources Information Center

    Evans, C. D.

    This paper describes the experiences of the industrial research laboratory of Kodak Ltd. in finding and providing a computer terminal most suited to its very varied requirements. These requirements include bibliographic and scientific data searching and access to a number of worldwide computing services for scientific computing work. The provision…

  41. Low Cost, Scalable Proteomics Data Analysis Using Amazon's Cloud Computing Services and Open Source Search Algorithms

    PubMed Central

    Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.

    2009-01-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578

  42. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    PubMed

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site (http://proteomics.mcw.edu/vipdac).

  43. Privacy Perspectives for Online Searchers: Confidentiality with Confidence?

    ERIC Educational Resources Information Center

    Duberman, Josh; Beaudet, Michael

    2000-01-01

    Presents issues and questions involved in online privacy from the information professional's perspective. Topics include consumer concerns; query confidentiality; securing computers from intrusion; electronic mail; search engines; patents and intellectual property searches; government's role; Internet service providers; database mining; user…

  44. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Levis, C. A.; Martin, C. H.; Reilly, C. H.; Gonsalvez, D. J.; Yamaura, Y.

    1985-01-01

    An extended gradient search code for broadcasting satellite service (BSS) spectrum/orbit assignment synthesis is discussed. Progress is also reported on both single-entry and full synthesis computational aids for fixed satellite service (FSS) spectrum/orbit assignment purposes.

  45. 12 CFR 1703.38 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... to OFHEO and upon the particular types of computer and associated equipment and the amount of time... format other than paper copy, such as in the form of computer tapes and disks, OFHEO will assess the... charge $3.00 for each certification or authentication of documents. (d) Computer searches. Services of...

  46. Community Information Centers and the Computer.

    ERIC Educational Resources Information Center

    Carroll, John M.; Tague, Jean M.

    Two computer data bases have been developed by the Computer Science Department at the University of Western Ontario for "Information London," the local community information center. One system, called LONDON, permits Boolean searches of a file of 5,000 records describing human service agencies in the London area. The second system,…

  47. The Job Search Goes Computer.

    ERIC Educational Resources Information Center

    Kennedy, Joyce Lain

    1994-01-01

    Discusses significant new developments in the electronic search process: (1) New Government Automation; (2) New Federal Initiatives; (3) New Telecommunications Services; (4) Campus Data Bases; (5) Off-Campus Data Bases; (6) Faxed or E-Mailed Resumes; (7) Automation of 3rd-Party Recruiters; (8) New Cyberservices; (9) Interview-Prep Software; (10)…

  48. Service Oriented Architecture for Coast Guard Command and Control

    DTIC Science & Technology

    2007-03-01

    ... Operations; BPEL4WS: The Business Process Execution Language for Web Services; BPMN: Business Process Modeling Notation; CASP: Computer Aided Search Planning. ... Business Process Modeling Notation (BPMN) provides a standardized graphical notation for drawing business processes in a workflow. Software tools…

  49. Cloud4Psi: cloud computing for 3D protein structure similarity searching.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-10-01

    Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. © The Author 2014. Published by Oxford University Press.

  50. Cloud4Psi: cloud computing for 3D protein structure similarity searching

    PubMed Central

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-01-01

    Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141

  51. 12 CFR 1703.38 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... only upon a showing of demonstrated need. (c) Certification or authentication of documents. OFHEO will charge $3.00 for each certification or authentication of documents. (d) Computer searches. Services of... to OFHEO and upon the particular types of computer and associated equipment and the amount of time...

  52. A Question of Interface Design: How Do Online Service GUIs Measure Up?

    ERIC Educational Resources Information Center

    Head, Alison J.

    1997-01-01

    Describes recent improvements in graphical user interfaces (GUIs) offered by online services. Highlights include design considerations, including computer engineering capabilities and users' abilities; fundamental GUI design principles; user empowerment; visual communication and interaction; and an evaluation of online search interfaces. (LRW)

  53. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is in essence a substitution and exchange model for resource services that meets users' needs for different resources after adjustments in multiple respects. Cloud computing offers advantages in many areas: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization and promotion of computer technology drive people to create digital library models, whose core idea is to strengthen the optimal management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing, moreover, is able to distribute computations across a large number of distributed computers and hence implement the connected service of multiple computers. Digital libraries, as a typical representative of cloud computing applications, can thus be used to carry out an analysis of the key technologies of cloud computing.

  54. Technology and Transformation in Academic Libraries.

    ERIC Educational Resources Information Center

    Shaw, Ward

    Academic library computing systems, which are among the most complex found in academic environments, now include external systems, such as online commercial search services and nationwide networks, and local systems that control and support internal operations. As librarians have realized the benefit of using computer systems to perform…

  55. Have You Got What It Takes...And Are You Using All You Could?

    ERIC Educational Resources Information Center

    Dyer, Hilary

    1994-01-01

    Suggests ways of using personal computers in special libraries, including online searching; CD-ROM networks; reference work; current awareness services; press cuttings services; selective dissemination of information; local databases; object linking and embedding; cataloging; acquisitions; circulation; serials control; interlibrary loan; space…

  56. Design and Implementation of a Threaded Search Engine for Tour Recommendation Systems

    NASA Astrophysics Data System (ADS)

    Lee, Junghoon; Park, Gyung-Leen; Ko, Jin-Hee; Shin, In-Hye; Kang, Mikyung

    This paper implements a threaded scan engine for the O(n!) search space and measures its performance, aiming to provide a responsive tour recommendation and scheduling service. As a preliminary step toward integrating POI ontology, a mobile object database, and personalization profiles for the development of new vehicular telematics services, this implementation can give a useful guideline for designing a challenging and computation-intensive vehicular telematics service. The implemented engine allocates subtrees to the respective threads and makes them run concurrently, exploiting the primitives provided by the operating system and the underlying multiprocessor architecture. It also makes it easy to add a variety of constraints; for example, the search tree is pruned if the cost of a partial allocation already exceeds the current best. The performance measurement results show that the service can run even on a low-power telematics device when the number of destinations does not exceed 15, with appropriate constraint processing.
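
    A compact sketch of the subtree-per-thread search with cost-based pruning described above, using invented toy distances. Note that CPython threads will not give true parallel speedup because of the GIL; the engine in the paper relies on OS threads on a multiprocessor:

      from concurrent.futures import ThreadPoolExecutor
      from itertools import permutations

      COST = [[0, 4, 8, 3], [4, 0, 2, 9], [8, 2, 0, 5], [3, 9, 5, 0]]  # toy distances

      def best_tour_from(first):
          # Scores every tour that starts at `first`, abandoning a tour as soon
          # as its partial cost already exceeds the best complete tour found.
          rest = [i for i in range(len(COST)) if i != first]
          best_cost, best_tour = float("inf"), None
          for perm in permutations(rest):
              tour, cost = (first,) + perm, 0
              for a, b in zip(tour, tour[1:]):
                  cost += COST[a][b]
                  if cost >= best_cost:      # prune: no need to finish this tour
                      break
              else:
                  best_cost, best_tour = cost, tour
          return best_cost, best_tour

      # One thread per subtree rooted at a fixed first destination.
      with ThreadPoolExecutor() as pool:
          results = pool.map(best_tour_from, range(len(COST)))
      print(min(results, key=lambda r: r[0]))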

  57. Monitoring performance of a highly distributed and complex computing infrastructure in LHCb

    NASA Astrophysics Data System (ADS)

    Mathe, Z.; Haen, C.; Stagni, F.

    2017-10-01

    In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study, we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, for which it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the general DIRAC framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
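
    As an illustration of the kind of query the ElasticSearch DSL library supports, the sketch below buckets job states into a date histogram; the index name, field names and site value are invented for the example, not LHCbDIRAC's actual schema:

      from elasticsearch import Elasticsearch
      from elasticsearch_dsl import Search

      client = Elasticsearch("http://localhost:9200")  # assumption: local test cluster

      # Bucket documents per hour, then count job statuses inside each bucket.
      s = Search(using=client, index="jobs").filter("term", site="LCG.CERN.ch")
      s.aggs.bucket("per_hour", "date_histogram",
                    field="timestamp", fixed_interval="1h") \
            .bucket("by_status", "terms", field="status")

      response = s.execute()
      for hour in response.aggregations.per_hour.buckets:
          print(hour.key_as_string,
                [(b.key, b.doc_count) for b in hour.by_status.buckets])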

  58. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    ERIC Educational Resources Information Center

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  59. Description of 'REQUEST-KYUSHYU' for KYUKEICHO regional data base

    NASA Astrophysics Data System (ADS)

    Takimoto, Shin'ichi

    Kyushu Economic Research Association (an incorporated foundation) recently initiated the regional database service 'REQUEST-Kyushu'. It is a full-scale database compiled from the information and know-how that the Association has accumulated over forty years. It comprises a regional information database of journal and newspaper articles and a statistical information database of economic statistics. The former is searched on a personal computer, and the search result (original text) is then sent by facsimile. The latter is also searched on a personal computer, where the data can be processed, edited or downloaded. This paper describes the characteristics, content and system outline of 'REQUEST-Kyushu'.

  60. Software Framework for Peer Data-Management Services

    NASA Technical Reports Server (NTRS)

    Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.

  61. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    PubMed Central

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet, and it facilitates third-party infrastructure and applications. While customers have no visibility into how their data is stored on the service provider's premises, it offers great benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to enable effective data access through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization. PMID:26380364

  62. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    PubMed

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet, and it facilitates third-party infrastructure and applications. While customers have no visibility into how their data is stored on the service provider's premises, it offers great benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to enable effective data access through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.
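
    A minimal sketch of one common way to build the fuzzy keyword set the abstract relies on (whether the paper uses exactly edit-distance-1 expansion is an assumption): every stored keyword is expanded into all variants within one edit, and the index is keyed on the variants, so a mistyped query still hits:

      import string

      def fuzzy_set(word):
          # All variants of `word` within edit distance 1.
          letters = string.ascii_lowercase
          splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
          deletes  = {l + r[1:] for l, r in splits if r}
          replaces = {l + c + r[1:] for l, r in splits if r for c in letters}
          inserts  = {l + c + r for l, r in splits for c in letters}
          return {word} | deletes | replaces | inserts

      index = {}                      # keyword variant -> document ids
      for variant in fuzzy_set("cloud"):
          index.setdefault(variant, set()).add("doc42")

      print(index.get("clud"))        # {'doc42'}: a one-deletion typo still matches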

  63. A Comparison of Costs of Searching the Machine-Readable Data Bases ERIC and "Psychological Abstracts" in an Annual Subscription Rate System Against Costs Estimated for the Same Searches Done in the Lockheed DIALOG System and the System Development Corporation for ERIC, and the Lockheed DIALOG System and PASAT for "Psychological Abstracts."

    ERIC Educational Resources Information Center

    Palmer, Crescentia

    A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…

  64. NASIC at MIT. Final Report, 1 March 1974 through 28 February 1975.

    ERIC Educational Resources Information Center

    Benenfeld, Alan R.; And Others

    Computer-based reference search services were provided to users on a fee-for-service basis at the Massachusetts Institute of Technology as the first, and experimental, node in the development of the Northeast Academic Science Information Center (NASIC). Development of a training program for information specialists and training materials is…

  65. Globus | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.

  66. Generic-distributed framework for cloud services marketplace based on unified ontology.

    PubMed

    Hasan, Samer; Valli Kumari, V

    2017-11-01

    Cloud computing is a pattern for delivering ubiquitous, on-demand computing resources based on a pay-as-you-use financial model. Typically, cloud providers advertise cloud service descriptions in various formats on the Internet. Cloud consumers, in turn, use general-purpose search engines (Google and Yahoo) to explore cloud service descriptions and find an adequate service. Unfortunately, general-purpose search engines are not designed to return a small and complete set of results, which makes the process a big challenge. This paper presents a generic, distributed framework for a cloud services marketplace that automates the cloud service discovery and selection process and removes the barriers between service providers and consumers. Additionally, this work implements two instances of the generic framework by adopting two different matching algorithms: a dominant-and-recessive-attributes algorithm borrowed from genetics, and a semantic similarity algorithm based on a unified cloud service ontology. Finally, this paper presents the unified cloud services ontology and models real-life cloud services according to the proposed ontology. To the best of the authors' knowledge, this is the first attempt to build a cloud services marketplace where cloud providers and cloud consumers can trade cloud services as utilities. In comparison with existing work, the semantic approach reduced the execution time by 20% and maintained the same values for all other parameters. The dominant-and-recessive-attributes approach reduced the execution time by 57% but showed a lower value for recall.
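
    The abstract does not reproduce either matching algorithm, so the sketch below illustrates only the generic idea of semantic-similarity matching with a bag-of-words cosine score; the service descriptions and function names are invented, and the paper's ontology-based similarity is certainly richer than this.

        # Minimal sketch: rank cloud service descriptions by cosine similarity
        # to a consumer request, using a simple bag-of-words model rather than
        # the paper's unified ontology. All data here is invented.
        import math
        from collections import Counter

        def cosine(a, b):
            """Cosine similarity of two token-count vectors."""
            dot = sum(a[t] * b[t] for t in set(a) & set(b))
            norm = (math.sqrt(sum(v * v for v in a.values()))
                    * math.sqrt(sum(v * v for v in b.values())))
            return dot / norm if norm else 0.0

        def rank_services(request, descriptions):
            """Return (score, description) pairs, best match first."""
            q = Counter(request.lower().split())
            scored = [(cosine(q, Counter(d.lower().split())), d) for d in descriptions]
            return sorted(scored, reverse=True)

        services = [
            "object storage pay per use high durability",
            "virtual machines on demand gpu compute",
            "managed sql database backup replication",
        ]
        for score, desc in rank_services("on demand gpu virtual machines", services):
            print(f"{score:.2f}  {desc}")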

  7. Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.; Wang, C. W.

    1986-01-01

    Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, we design an experiment and solve the same problem under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, we present results of practical, rather than statistical, significance. Our implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than our implementation of a gradient search procedure does with our objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.

  8. Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.

    1986-01-01

    Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, an experiment is designed and the same problem is solved under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, results of practical, rather than statistical, significance are presented. Implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than implementation of a gradient search procedure does with the objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.
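
    Neither record reproduces the synthesis code, so the following is a minimal generic sketch of cyclic coordinate ascent on a black-box objective, with a crude minimum-separation function standing in for the min-C/I computation; the arc limits, step schedule, and problem size are invented for illustration.

        # Generic cyclic coordinate ascent, in the spirit of the synthesis
        # procedure above. The toy objective below stands in for the actual
        # min-C/I computation, which is not reproduced here.
        import random

        ARC = (0.0, 30.0)  # available orbital arc in degrees (toy value)

        def clamp(v):
            return max(ARC[0], min(ARC[1], v))

        def min_separation(positions):
            """Crude stand-in for the min-C/I objective: smallest spacing."""
            ps = sorted(positions)
            return min(b - a for a, b in zip(ps, ps[1:]))

        def cyclic_coordinate_search(f, x, step=2.0, shrink=0.5, tol=1e-3):
            """Maximize f by adjusting one coordinate (one satellite) at a time."""
            while step > tol:
                improved = False
                for i in range(len(x)):
                    for delta in (step, -step):
                        trial = x[:]
                        trial[i] = clamp(trial[i] + delta)
                        if f(trial) > f(x):
                            x, improved = trial, True
                            break
                if not improved:
                    step *= shrink  # refine once a full cycle yields no gain
            return x

        random.seed(0)
        start = [random.uniform(*ARC) for _ in range(5)]
        best = cyclic_coordinate_search(min_separation, start)
        print([round(p, 2) for p in best], round(min_separation(best), 2))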

  9. BCM Search Launcher--an integrated interface to molecular biology data base search and analysis services available on the World Wide Web.

    PubMed

    Smith, R F; Wiese, B A; Wojzynski, M K; Davison, D B; Worley, K C

    1996-05-01

    The BCM Search Launcher is an integrated set of World Wide Web (WWW) pages that organize molecular biology-related search and analysis services available on the WWW by function, and provide a single point of entry for related searches. The Protein Sequence Search Page, for example, provides a single sequence entry form for submitting sequences to WWW servers that offer remote access to a variety of different protein sequence search tools, including BLAST, FASTA, Smith-Waterman, BEAUTY, PROSITE, and BLOCKS searches. Other Launch pages provide access to (1) nucleic acid sequence searches, (2) multiple and pair-wise sequence alignments, (3) gene feature searches, (4) protein secondary structure prediction, and (5) miscellaneous sequence utilities (e.g., six-frame translation). The BCM Search Launcher also provides a mechanism to extend the utility of other WWW services by adding supplementary hypertext links to results returned by remote servers. For example, links to the NCBI's Entrez data base and to the Sequence Retrieval System (SRS) are added to search results returned by the NCBI's WWW BLAST server. These links provide easy access to auxiliary information, such as Medline abstracts, that can be extremely helpful when analyzing BLAST data base hits. For new or infrequent users of sequence data base search tools, we have preset the default search parameters to provide the most informative first-pass sequence analysis possible. We have also developed a batch client interface for Unix and Macintosh computers that allows multiple input sequences to be searched automatically as a background task, with the results returned as individual HTML documents directly to the user's system. The BCM Search Launcher and batch client are available on the WWW at URL http://gc.bcm.tmc.edu:8088/search-launcher.html.
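
    The original batch client predates modern tooling, but its submit-and-save pattern is easy to sketch; in the hedged example below the endpoint URL and form field names are placeholders (the original server is long retired), so it shows the shape of such a client rather than a working one.

        # Sketch of a batch search client in the spirit of the one described
        # above: each input sequence is POSTed to a search form and the HTML
        # result saved locally. URL and form fields are hypothetical.
        import pathlib
        import urllib.parse
        import urllib.request

        SEARCH_URL = "http://example.org/search-launcher"  # placeholder endpoint

        def batch_search(sequences):
            for name, seq in sequences.items():
                form = urllib.parse.urlencode({"sequence": seq, "tool": "blast"}).encode()
                with urllib.request.urlopen(SEARCH_URL, data=form) as resp:
                    # one HTML result document per input sequence
                    pathlib.Path(f"{name}.html").write_bytes(resp.read())

        batch_search({"query1": "MKTAYIAKQR", "query2": "GATTACA"})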

  10. Generalized Method for the User Evaluation of Purchased Information Services. Report Number Three; Monthly Report (October 1 to November 30, 1975).

    ERIC Educational Resources Information Center

    Hall, Homer J.

    Four case histories were studied in an on-going project to develop a method for user selection of purchased scientific and technical information services. The issues involved were: (1) the value of computer search services to a small branch of a company technical library; (2) the special decision-making factors used for selecting items of very…

  11. Use of Computers in Human Factors Engineering

    DTIC Science & Technology

    1974-11-01

    [Garbled OCR fragment of a scanned DDC report bibliography (search control no. /ZHK13), reproduced by the National Technical Information Service, U.S. Department of Commerce, Springfield VA 22151. The legible entries cite Applied Psychological Services (Villanova and Wayne, PA) reports on techniques for evaluating human factors; the rest of the text is unrecoverable.]

  12. Urban Typologies: Towards an ORNL Urban Information System (UrbIS)

    NASA Astrophysics Data System (ADS)

    KC, B.; King, A. W.; Sorokine, A.; Crow, M. C.; Devarakonda, R.; Hilbert, N. L.; Karthik, R.; Patlolla, D.; Surendran Nair, S.

    2016-12-01

    Urban environments differ in a large number of key attributes; these include infrastructure, morphology, demography, and economic and social variables, among others. These attributes determine many urban properties such as energy and water consumption, greenhouse gas emissions, air quality, public health, sustainability, and vulnerability and resilience to climate change. Characterization of urban environments by a single property such as population size does not sufficiently capture this complexity. In addressing this multivariate complexity one typically faces such problems as disparate and scattered data, challenges of big data management, spatial searching, insufficient computational capacity for data-driven analysis and modelling, and the lack of tools to quickly visualize the data and compare the analytical results across different cities and regions. We have begun the development of an Urban Information System (UrbIS) to address these issues, one that embraces the multivariate "big data" of urban areas and their environments across the United States utilizing the Big Data as a Service (BDaaS) concept. With technological roots in High-performance Computing (HPC), BDaaS is based on the idea of outsourcing computations to different computing paradigms, scalable to super-computers. UrbIS aims to incorporate federated metadata search, integrated modeling and analysis, and geovisualization into a single seamless workflow. The system includes web-based 2D/3D visualization with an iGlobe interface, fast cloud-based and server-side data processing and analysis, and a metadata search engine based on the Mercury data search system developed at Oak Ridge National Laboratory (ORNL). Results of analyses will be made available through web services. We are implementing UrbIS in ORNL's Compute and Data Environment for Science (CADES) and are leveraging ORNL experience in complex data and geospatial projects. The development of UrbIS is being guided by an investigation of urban heat islands (UHI) using high-dimensional clustering and statistics to define urban typologies (types of cities) in an investigation of how UHI vary with urban type across the United States.

  13. Searching for SNPs with cloud computing

    PubMed Central

    2009-01-01

    As DNA sequencing outpaces improvements in computer speed, there is a critical need to accelerate tasks like alignment and SNP calling. Crossbow is a cloud-computing software tool that combines the aligner Bowtie and the SNP caller SOAPsnp. Executing in parallel using Hadoop, Crossbow analyzes data comprising 38-fold coverage of the human genome in three hours using a 320-CPU cluster rented from a cloud computing service for about $85. Crossbow is available from http://bowtie-bio.sourceforge.net/crossbow/. PMID:19930550
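
    Crossbow itself combines Bowtie and SOAPsnp under Hadoop; as a much-reduced illustration of the Hadoop Streaming pattern it builds on, the toy mapper/reducer below tallies per-position coverage from tab-separated alignment records. The input format and file name are invented, and this is not Crossbow's actual code.

        # Toy Hadoop-Streaming-style pipeline illustrating the map/sort/reduce
        # pattern Crossbow builds on (NOT Crossbow itself). Input lines are
        # "position<TAB>read_length". Run locally as:
        #   cat alignments.tsv | python coverage.py map | sort | python coverage.py reduce
        import sys

        def mapper(lines):
            """Emit 'position<TAB>1' for every base covered by an aligned read."""
            for line in lines:
                pos, length = map(int, line.split("\t"))
                for p in range(pos, pos + length):
                    print(f"{p}\t1")

        def reducer(lines):
            """Sum counts per position (input must arrive sorted by key)."""
            current, total = None, 0
            for line in lines:
                key, value = line.rstrip("\n").split("\t")
                if key != current and current is not None:
                    print(f"{current}\t{total}")
                    total = 0
                current = key
                total += int(value)
            if current is not None:
                print(f"{current}\t{total}")

        if __name__ == "__main__":
            (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)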

  14. The effectiveness of M-health technologies for improving health and health services: a systematic review protocol

    PubMed Central

    2010-01-01

    Background The application of mobile computing and communication technology is rapidly expanding in the fields of health care and public health. This systematic review will summarise the evidence for the effectiveness of mobile technology interventions for improving health and health service outcomes (M-health) around the world. Findings To be included in the review interventions must aim to improve or promote health or health service use and quality, employing any mobile computing and communication technology. This includes: (1) interventions designed to improve diagnosis, investigation, treatment, monitoring and management of disease; (2) interventions to deliver treatment or disease management programmes to patients, health promotion interventions, and interventions designed to improve treatment compliance; and (3) interventions to improve health care processes e.g. appointment attendance, result notification, vaccination reminders. A comprehensive, electronic search strategy will be used to identify controlled studies, published since 1990, and indexed in MEDLINE, EMBASE, PsycINFO, Global Health, Web of Science, the Cochrane Library, or the UK NHS Health Technology Assessment database. The search strategy will include terms (and synonyms) for the following mobile electronic devices (MEDs) and a range of compatible media: mobile phone; personal digital assistant (PDA); handheld computer (e.g. tablet PC); PDA phone (e.g. BlackBerry, Palm Pilot); Smartphone; enterprise digital assistant; portable media player (i.e. MP3 or MP4 player); handheld video game console. No terms for health or health service outcomes will be included, to ensure that all applications of mobile technology in public health and health services are identified. Bibliographies of primary studies and review articles meeting the inclusion criteria will be searched manually to identify further eligible studies. Data on objective and self-reported outcomes and study quality will be independently extracted by two review authors. Where there are sufficient numbers of similar interventions, we will calculate and report pooled risk ratios or standardised mean differences using meta-analysis. Discussion This systematic review will provide recommendations on the use of mobile computing and communication technology in health care and public health and will guide future work on intervention development and primary research in this field. PMID:20925916

  15. The effectiveness of M-health technologies for improving health and health services: a systematic review protocol.

    PubMed

    Free, Caroline; Phillips, Gemma; Felix, Lambert; Galli, Leandro; Patel, Vikram; Edwards, Philip

    2010-10-06

    The application of mobile computing and communication technology is rapidly expanding in the fields of health care and public health. This systematic review will summarise the evidence for the effectiveness of mobile technology interventions for improving health and health service outcomes (M-health) around the world. To be included in the review, interventions must aim to improve or promote health or health service use and quality, employing any mobile computing and communication technology. This includes: (1) interventions designed to improve diagnosis, investigation, treatment, monitoring and management of disease; (2) interventions to deliver treatment or disease management programmes to patients, health promotion interventions, and interventions designed to improve treatment compliance; and (3) interventions to improve health care processes e.g. appointment attendance, result notification, vaccination reminders. A comprehensive, electronic search strategy will be used to identify controlled studies, published since 1990, and indexed in MEDLINE, EMBASE, PsycINFO, Global Health, Web of Science, the Cochrane Library, or the UK NHS Health Technology Assessment database. The search strategy will include terms (and synonyms) for the following mobile electronic devices (MEDs) and a range of compatible media: mobile phone; personal digital assistant (PDA); handheld computer (e.g. tablet PC); PDA phone (e.g. BlackBerry, Palm Pilot); Smartphone; enterprise digital assistant; portable media player (i.e. MP3 or MP4 player); handheld video game console. No terms for health or health service outcomes will be included, to ensure that all applications of mobile technology in public health and health services are identified. Bibliographies of primary studies and review articles meeting the inclusion criteria will be searched manually to identify further eligible studies. Data on objective and self-reported outcomes and study quality will be independently extracted by two review authors. Where there are sufficient numbers of similar interventions, we will calculate and report pooled risk ratios or standardised mean differences using meta-analysis. This systematic review will provide recommendations on the use of mobile computing and communication technology in health care and public health and will guide future work on intervention development and primary research in this field.
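
    As an illustration of running such a device-term search strategy programmatically, the sketch below assembles an abbreviated boolean query and submits it to PubMed through NCBI's public E-utilities; the endpoint and parameters are the standard documented ones, but the term list here is only a fragment of the published strategy.

        # Assemble a boolean device-term query in the style of the protocol's
        # search strategy and run it against PubMed via NCBI E-utilities.
        # The term list is abbreviated, not the full published strategy.
        import json
        import urllib.parse
        import urllib.request

        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        device_terms = [
            "mobile phone", "cellular phone", "personal digital assistant",
            "PDA", "smartphone", "handheld computer",
        ]
        query = " OR ".join(f'"{t}"' for t in device_terms)

        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": query,
            "retmax": 5,
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            result = json.load(resp)["esearchresult"]
        print(result["count"], result["idlist"])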

  16. Information-seeking behavior changes in community-based teaching practices.

    PubMed

    Byrnes, Jennifer A; Kulick, Tracy A; Schwartz, Diane G

    2004-07-01

    A National Library of Medicine information access grant allowed for a collaborative project to provide computer resources in fourteen clinical practice sites that enabled health care professionals to access medical information via PubMed and the Internet. Health care professionals were taught how to access quality, cost-effective information that was user friendly and would result in improved patient care. Selected sites were located in medically underserved areas and received a computer, a printer, and, during year one, a fax machine. Participants were provided dial-up Internet service or were connected to the affiliated hospital's network. Clinicians were trained in how to search PubMed as a tool for practicing evidence-based medicine and to support clinical decision making. Health care providers were also taught how to find patient-education materials and continuing education programs and how to network with other professionals. Prior to the training, participants completed a questionnaire to assess their computer skills and familiarity with searching the Internet, MEDLINE, and other health-related databases. Responses indicated favorable changes in information-seeking behavior, including an increased frequency in conducting MEDLINE searches and Internet searches for work-related information.

  17. FIREDOC users manual, 3rd edition

    NASA Astrophysics Data System (ADS)

    Jason, Nora H.

    1993-12-01

    FIREDOC is the on-line bibliographic database that reflects the holdings (published reports, journal articles, conference proceedings, books, and audiovisual items) of the Fire Research Information Services (FRIS) at the Building and Fire Research Laboratory (BFRL), National Institute of Standards and Technology (NIST). This manual provides step-by-step procedures for entering and exiting the database via telecommunication lines, as well as a number of techniques for searching the database and processing the results of the searches. This Third Edition is necessitated by the change to a UNIX platform. The new computer allows faster response times when searching via a modem and, in addition, offers Internet accessibility. FIREDOC may be used with personal computers running DOS or Windows, or with Macintosh computers and workstations. A new section on how to access the Internet is included, as well as one on how to obtain references of interest. Appendix F: Quick Guide to Getting Started will be useful to both modem and Internet users.

  18. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding an entire biological system, by allowing them to retrieve biological models in their own tools, combine queries in workflows, and efficiently analyse models.
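
    A client for WSDL-described SOAP services like these can be sketched in a few lines with the third-party zeep library; the WSDL URL below is a placeholder and the operation name is assumed rather than taken from the live service, since the original SOAP endpoints may no longer be online.

        # Generic sketch of consuming a WSDL-described SOAP service such as
        # the BioModels Web Services, using the third-party `zeep` library.
        # The WSDL URL and operation name are placeholders, not the actual
        # BioModels endpoints.
        from zeep import Client

        WSDL = "https://example.org/biomodels?wsdl"  # placeholder WSDL location

        client = Client(WSDL)
        # Operations advertised in the WSDL become methods on client.service.
        model_id = "BIOMD0000000001"
        sbml = client.service.getModelSBMLById(model_id)  # hypothetical operation
        print(sbml[:200])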

  19. Dual Career Services

    Science.gov Websites


  20. Expanding Internationally: OCLC Gears Up.

    ERIC Educational Resources Information Center

    Chepesiuk, Ron

    1997-01-01

    Describes the Online Computer Library Center (OCLC) efforts in China, Germany, Canada, Scotland, Jamaica and Brazil. Discusses FirstSearch, an end-user reference service, and WorldCat, a bibliographic database. Highlights international projects developing increased OCLC online availability, database loading software, CD-ROM cataloging,…

  1. Computer screening for palliative care needs in primary care: a mixed-methods study.

    PubMed

    Mason, Bruce; Boyd, Kirsty; Steyn, John; Kendall, Marilyn; Macpherson, Stella; Murray, Scott A

    2018-05-01

    Though the majority of people could benefit from palliative care before they die, most do not receive this approach, especially those with multimorbidity and frailty. GPs find it difficult to identify such patients. To refine and evaluate the utility of a computer application (AnticiPal) to help primary care teams screen their registered patients for people who could benefit from palliative care. A mixed-methods study of eight GP practices in Scotland, conducted in 2016-2017. After a search development cycle the authors adopted a mixed-methods approach, combining analysis of the number of people identified by the search with qualitative observations of the computer search as used by primary care teams, and interviews with professionals and patients. The search identified 0.8% of 62 708 registered patients. A total of 27 multidisciplinary meetings were observed, and eight GPs and 10 patients were interviewed. GPs thought the search identified many unrecognised patients with advanced multimorbidity and frailty, but were concerned about workload implications of assessment and care planning. Patients and carers endorsed the value of proactive identification of people with advanced illness. GP practices can use computer searching to generate lists of patients for review and care planning. The challenges of starting a conversation about the future remain. However, most patients regard key components of palliative care (proactive planning, including sharing information with urgent care services) as important. Screening for people with deteriorating health at risk from unplanned care is a current focus for quality improvement and should not be limited by labelling it solely as 'palliative care'. © British Journal of General Practice 2018.
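
    The abstract does not spell out AnticiPal's search criteria, so the sketch below shows only the general shape of such a register search: a SQL query over an invented practice schema that flags older patients with several chronic conditions and a recent admission. The schema, markers, and thresholds are illustrative, not the tool's.

        # Rough illustration of screening a practice register with a computer
        # search, in the spirit of AnticiPal. Schema and thresholds invented.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE patients (id INTEGER PRIMARY KEY, age INTEGER);
        CREATE TABLE conditions (patient_id INTEGER, code TEXT);
        CREATE TABLE admissions (patient_id INTEGER, admitted TEXT);
        INSERT INTO patients VALUES (1, 84), (2, 47), (3, 91);
        INSERT INTO conditions VALUES
            (1,'heart failure'),(1,'ckd'),(1,'copd'),(3,'dementia'),(3,'frailty');
        INSERT INTO admissions VALUES (1,'2017-11-02'),(1,'2017-12-19'),(3,'2017-10-05');
        """)

        # Flag older patients with multimorbidity and an unplanned admission,
        # a crude proxy for "deteriorating health at risk from unplanned care".
        rows = con.execute("""
        SELECT p.id, p.age, COUNT(DISTINCT c.code) AS n_conditions
        FROM patients p
        JOIN conditions c ON c.patient_id = p.id
        JOIN admissions a ON a.patient_id = p.id
        WHERE p.age >= 75
        GROUP BY p.id
        HAVING n_conditions >= 2
        """).fetchall()
        print(rows)  # candidates for palliative care review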

  2. Information-seeking behavior changes in community-based teaching practices*†

    PubMed Central

    Byrnes, Jennifer A.; Kulick, Tracy A.; Schwartz, Diane G.

    2004-01-01

    A National Library of Medicine information access grant allowed for a collaborative project to provide computer resources in fourteen clinical practice sites that enabled health care professionals to access medical information via PubMed and the Internet. Health care professionals were taught how to access quality, cost-effective information that was user friendly and would result in improved patient care. Selected sites were located in medically underserved areas and received a computer, a printer, and, during year one, a fax machine. Participants were provided dial-up Internet service or were connected to the affiliated hospital's network. Clinicians were trained in how to search PubMed as a tool for practicing evidence-based medicine and to support clinical decision making. Health care providers were also taught how to find patient-education materials and continuing education programs and how to network with other professionals. Prior to the training, participants completed a questionnaire to assess their computer skills and familiarity with searching the Internet, MEDLINE, and other health-related databases. Responses indicated favorable changes in information-seeking behavior, including an increased frequency in conducting MEDLINE searches and Internet searches for work-related information. PMID:15243639

  3. Annual Progress Report for July 1, 1980 through June 30, 1981,

    DTIC Science & Technology

    1981-08-01

    [OCR fragment of the report's table of contents and bibliography. Recoverable items: 14.4 Directory of Computer-Readable Bibliographic Databases; 14.5 University of Illinois Online Search Service; an M.S.I.E. thesis on measures of human performance in fault diagnosis tasks (July 1981); Morehead, "Models of Human Behavior in Online Searching" (1981, to appear); and Williams, "Databases and Online Statistics for 1979," Bulletin of the American Society for Information Science 7(2). The remainder is unrecoverable.]

  4. Conceptual Design of a Robotic Loader System for Remote Missile Launchers.

    DTIC Science & Technology

    1985-09-01

    Artificial intelligence techniques were surveyed in order to assess their space applicability and to identify areas which can be developed/adapted to European... such data bases as NTIS and COMPENDEX. The second computer-aided search was done through the U.S. Army information services at Redstone Arsenal... Lockheed Corporation. The first DIALOG data base explored was NTIS (National Technical Information Services, U.S. Dept. of Commerce), which contains...

  5. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch and we set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS level information gathered through the API is sent to the MySQL database through an ad-hoc developed RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which the Cloud tenants can easily configure to suit their specific needs.
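
    As a minimal sketch of the indexing-and-dashboard pattern described above, the snippet below pushes an invented accounting document into Elasticsearch and runs the kind of per-tenant aggregation a Kibana panel would issue; the index name and fields are made up, and the calls assume the elasticsearch-py 8.x client against a local node.

        # Minimal sketch of pushing IaaS accounting records into Elasticsearch
        # and running a pre-defined aggregation, loosely mirroring the ELK
        # pipeline above. Index name and fields are invented.
        from datetime import datetime
        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")  # assumed local ES node

        doc = {
            "tenant": "alice-tier2",
            "vcpus": 16,
            "wallclock_hours": 3.5,
            "timestamp": datetime.utcnow().isoformat(),
        }
        es.index(index="iaas-accounting", document=doc)

        # The kind of aggregation a Kibana dashboard panel would issue:
        resp = es.search(
            index="iaas-accounting",
            size=0,
            aggs={"cpu_by_tenant": {"terms": {"field": "tenant.keyword"},
                  "aggs": {"hours": {"sum": {"field": "wallclock_hours"}}}}},
        )
        print(resp["aggregations"]["cpu_by_tenant"]["buckets"])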

  6. CD-ROM Growth: Unleashing the Potential.

    ERIC Educational Resources Information Center

    Nelson, Nancy Melin

    1991-01-01

    Discusses the use of CD-ROMs in library processing and public services units. Topics discussed include local area networks, workstations, network security, search software, disk operating systems (DOS), computer viruses, CD-ROM selection and acquisition, licensing, and standards. A sidebar lists current CD-ROM products appropriate for reference…

  7. 32 CFR 286.29 - Collection of fees and fee rates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Audiovisual documentary materials. Search costs are computed as for any other record. Duplication cost is the.... Audiovisual materials provided to a requester need not be in reproducible format or quality. (f) Other records... manner described for audiovisual documentary material. (g) Costs for special services. Complying with...

  8. Lifelong Learning: Skills and Online Resources

    ERIC Educational Resources Information Center

    Lim, Russell F.; Hsiung, Bob C.; Hales, Deborah J.

    2006-01-01

    Objective: Advances in information technology enable the practicing psychiatrist's quest to keep up-to-date with new discoveries in psychiatry, as well as to meet recertification requirements. However, physicians' computer skills do not always keep up with technology, nor do they take advantage of online search and continuing education services.…

  9. Harmonised information exchange between decentralised food composition database systems.

    PubMed

    Pakkala, H; Christensen, T; de Victoria, I Martínez; Presser, K; Kadvan, A

    2010-11-01

    The main aim of the European Food Information Resource (EuroFIR) project is to develop and disseminate a comprehensive, coherent and validated data bank for the distribution of food composition data (FCD). This can only be accomplished by harmonising food description and data documentation and by the use of standardised thesauri. The data bank is implemented through a network of local FCD storages (usually national) under the control and responsibility of the local (national) EuroFIR partner. The implementation of the system based on the EuroFIR specifications is under development. The data interchange happens through the EuroFIR Web Services interface, allowing the partners to implement their system using methods and software suitable for the local computer environment. The implementation uses common international standards, such as Simple Object Access Protocol, Web Service Description Language and Extensible Markup Language (XML). A specifically constructed EuroFIR search facility (eSearch) was designed for end users. The EuroFIR eSearch facility compiles queries using a specifically designed Food Data Query Language and sends a request to those network nodes linked to the EuroFIR Web Services that will most likely have the requested information. The retrieved FCD are compiled into a specifically designed data interchange format (the EuroFIR Food Data Transport Package) in XML, which is sent back to the EuroFIR eSearch facility as the query response. The same request-response operation happens in all the nodes that have been selected in the EuroFIR eSearch facility for a certain task. Finally, the FCD are combined by the EuroFIR eSearch facility and delivered to the food compiler. The implementation of FCD interchange using decentralised computer systems instead of traditional data-centre models has several advantages. First of all, the local partners have more control over their FCD, which will increase commitment and improve quality. Second, a multicentred solution is more economically viable than the creation of a centralised data bank, because of the lack of national political support for multinational systems.
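
    The EuroFIR transport package schema is not reproduced in the abstract, so the sketch below uses invented element names merely to illustrate round-tripping food composition data through an XML package with Python's standard library; the real Food Data Transport Package is richer and formally specified.

        # Stand-in sketch of serialising food composition data into an XML
        # transport package and parsing it back, as the EuroFIR interchange
        # does. Element names here are invented, not the EuroFIR schema.
        import xml.etree.ElementTree as ET

        def build_package(food_name, components):
            pkg = ET.Element("FoodDataPackage")
            food = ET.SubElement(pkg, "Food", name=food_name)
            for comp, (value, unit) in components.items():
                ET.SubElement(food, "Component", name=comp, value=str(value), unit=unit)
            return ET.tostring(pkg, encoding="unicode")

        xml_text = build_package("rye bread", {"protein": (8.5, "g/100g"),
                                               "fat": (1.2, "g/100g")})
        print(xml_text)

        # A receiving eSearch-style facility would parse it back:
        for comp in ET.fromstring(xml_text).iter("Component"):
            print(comp.get("name"), comp.get("value"), comp.get("unit"))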

  10. Online access to journal abstracts and articles.

    PubMed

    Giedd, J N; Smith, K G

    1997-01-01

    Advances in information technology now offer several options for child and adolescent psychopharmacologists to navigate the increasingly complex terrain of scientific literature and keep abreast of the rapidly changing advances in our field. MEDLINE, the world's largest database of medical literature, can be accessed and searched by a variety of free or fee-based services. In addition to efficient retrieval of citations and abstracts based on subject, author, or title, many of these services now provide, for a fee, the entire text and graphics of articles (displayed on computer screen, faxed, or mailed). There are also current awareness services to alert the user when new requested literature becomes available as well as services to send via e-mail the tables of contents of requested journals (sometimes prior to paper publication). For online citation and abstract retrieval, we found that free services, such as PubMed, performed as well as or better than fee-based services. Physicians' Online, sponsored by the pharmaceutical industry, offered the lowest price for full-text manuscript delivery. In this article, we review literature search, delivery, and update services and offer some tips on how to most effectively use these resources.

  11. A Contextual Information Acquisition Approach Based on Semantics and Mashup Technology

    NASA Astrophysics Data System (ADS)

    He, Yangfan; Li, Lu; He, Keqing; Chen, Xiuhong

    Pay-per-use is an essential feature of cloud computing. Users can make use of parts of a large-scale service to satisfy their requirements, at the cost of only a small payment. A good understanding of users' requirements is a prerequisite for choosing the needed service precisely. Context implies users' potential requirements and can complement the requirements they deliver explicitly. However, traditional context-aware computing research usually demands specific kinds of sensors to acquire contextual information, which sets too high a threshold for an application to become context-aware. This paper proposes an approach that combines contextual information obtained directly and indirectly from cloud services. Semantic relationships between different kinds of contexts lay the foundation for searching the cloud services, and mashup technology is adopted to compose the heterogeneous services. Abundant contextual information may lend strong support to a comprehensive understanding of users' context and a better abstraction of contextual requirements.

  12. Government Cloud Computing Policies: Potential Opportunities for Advancing Military Biomedical Research.

    PubMed

    Lebeda, Frank J; Zalatoris, Jeffrey J; Scheerer, Julia B

    2018-02-07

    This position paper summarizes the development and the present status of Department of Defense (DoD) and other government policies and guidances regarding cloud computing services. Due to the heterogeneous and growing biomedical big datasets, cloud computing services offer an opportunity to mitigate the associated storage and analysis requirements. Having on-demand network access to a shared pool of flexible computing resources creates a consolidated system that should reduce potential duplications of effort in military biomedical research. Interactive, online literature searches were performed with Google, at the Defense Technical Information Center, and at two National Institutes of Health research portfolio information sites. References cited within some of the collected documents also served as literature resources. We gathered, selected, and reviewed DoD and other government cloud computing policies and guidances published from 2009 to 2017. These policies were intended to consolidate computer resources within the government and reduce costs by decreasing the number of federal data centers and by migrating electronic data to cloud systems. Initial White House Office of Management and Budget information technology guidelines were developed for cloud usage, followed by policies and other documents from the DoD, the Defense Health Agency, and the Armed Services. Security standards from the National Institute of Standards and Technology, the Government Services Administration, the DoD, and the Army were also developed. Government Services Administration and DoD Inspectors General monitored cloud usage by the DoD. A 2016 Government Accountability Office report characterized cloud computing as being economical, flexible and fast. A congressionally mandated independent study reported that the DoD was active in offering a wide selection of commercial cloud services in addition to its milCloud system. Our findings from the Department of Health and Human Services indicated that the security infrastructure in cloud services may be more compliant with the Health Insurance Portability and Accountability Act of 1996 regulations than traditional methods. To gauge the DoD's adoption of cloud technologies, proposed metrics included cost factors, ease of use, automation, availability, accessibility, security, and policy compliance. Since 2009, plans and policies were developed for the use of cloud technology to help consolidate and reduce the number of data centers, which was expected to reduce costs, improve environmental factors, enhance information technology security, and maintain mission support for service members. Cloud technologies were also expected to improve employee efficiency and productivity. Federal cloud computing policies within the last decade also offered increased opportunities to advance military healthcare. It was assumed that these opportunities would benefit consumers of healthcare and health science data by allowing more access to centralized cloud computer facilities to store, analyze, search and share relevant data, to enhance standardization, and to reduce potential duplications of effort. We recommend that cloud computing be considered by DoD biomedical researchers for increasing connectivity, presumably by facilitating communications and data sharing, among the various intra- and extramural laboratories. We also recommend that policies and other guidances be updated to include developing additional metrics that will help stakeholders evaluate the above-mentioned assumptions and expectations. Published by Oxford University Press on behalf of the Association of Military Surgeons of the United States 2018. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  13. Sharing Service Resource Information for Application Integration in a Virtual Enterprise - Modeling the Communication Protocol for Exchanging Service Resource Information

    NASA Astrophysics Data System (ADS)

    Yamada, Hiroshi; Kawaguchi, Akira

    Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes like the IP address of the application server to highly dynamic ones like the CPU load. The most famous wide-area service discovery mechanism based on names is DNS. Its hierarchical tree organization and caching methods take advantage of the static information managed. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism to search for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain, which consists of service providers managed under the same management policy. This concept of the service domain is similar to that of autonomous systems (ASs). In each service domain, the service resource information provider manages the service resource information of the service providers that exist in this domain, and exchanges this information with the service resource information providers of other service domains. We also verified the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.

  14. Providing Computer-Based Information Services to an Academic Community. Final Technical Report.

    ERIC Educational Resources Information Center

    Bayer, Bernard

    The Mechanized Information Center (MIC) at the Ohio State University conducts retrospective and current awareness searches for faculty, students, and staff using data bases for agriculture, chemistry, education, psychology, and social sciences, as well as a multidisciplinary data base. The final report includes (1) a description of the background…

  15. Techniques for Increasing the Efficiency of Automation Systems in School Library Media Centers.

    ERIC Educational Resources Information Center

    Caffarella, Edward P.

    1996-01-01

    Discusses methods of managing queues (waiting lines) to optimize the use of student computer stations in school library media centers and to make searches more efficient and effective. The three major factors in queue management are arrival interval of the patrons, service time, and number of stations. (Author/LRW)

  16. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    NASA Astrophysics Data System (ADS)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS framework to discover key tradeoffs within the LSRB system.

  17. ADS Bumblebee comes of age

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin; Grant, Carolyn S.; Thompson, Donna M.; Chyla, Roman; McDonald, Steven; Shaulis, Taylor J.; Blanco-Cuaresma, Sergi; Shapurian, Golnaz; Hostetler, Timothy W.; Templeton, Matthew R.; Lockhart, Kelly E.

    2018-01-01

    The ADS Team has been working on a new system architecture and user interface named “ADS Bumblebee” since 2015. The new system presents many advantages over the traditional ADS interface and search engine (“ADS Classic”). A new, state of the art search engine features a number of new capabilities such as full-text search, advanced citation queries, filtering of results and scalable analytics for any search results. Its services are built on a cloud computing platform which can be easily scaled to match user demand. The Bumblebee user interface is a rich javascript application which leverages the features of the search engine and integrates a number of additional visualizations such as co-author and co-citation networks which provide a hierarchical view of research groups and research topics, respectively. Displays of paper analytics provide views of the basic article metrics (citations, reads, and age). All visualizations are interactive and provide ways to further refine search results. This new search system, which has been in beta for the past three years, has now matured to the point that it provides feature and content parity with ADS Classic, and has become the recommended way to access ADS content and services. Following a successful transition to Bumblebee, the use of ADS Classic will be discouraged starting in 2018 and phased out in 2019. You can access our new interface at https://ui.adsabs.harvard.edu
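
    Bumblebee's search engine is also exposed through ADS's public API, so a search like the ones described above can be scripted; in the sketch below the endpoint and parameters follow the documented ADS search API, and the token is a placeholder you would replace with a personal one from your ADS account.

        # Short sketch of querying the ADS (Bumblebee) search API with the
        # `requests` library. Replace the placeholder token with your own.
        import requests

        TOKEN = "YOUR-ADS-API-TOKEN"  # placeholder
        resp = requests.get(
            "https://api.adsabs.harvard.edu/v1/search/query",
            params={"q": 'full:"coordinate search" year:1986',
                    "fl": "bibcode,title", "rows": 5},
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        for doc in resp.json()["response"]["docs"]:
            print(doc["bibcode"], doc["title"][0])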

  18. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.

    1983-01-01

    Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal to noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.

  19. Tracking-Data-Conversion Tool

    NASA Technical Reports Server (NTRS)

    Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.

  20. Towards Simpler Custom and OpenSearch Services for Voluminous NEWS Merged A-Train Data (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Fetzer, E.; Braverman, A. J.; Lewis, S.; Henderson, M. L.; Guillaume, A.; Lee, S.; de La Torre Juarez, M.; Dang, H. T.

    2010-12-01

    To simplify access to large and complex satellite data sets for climate analysis and model verification, we developed web services that are used to study long-term and global-scale trends in climate, the water and energy cycle, and weather variability. A related NASA Energy and Water Cycle Study (NEWS) task has created a merged NEWS Level 2 data set from multiple instruments in NASA's A-Train constellation of satellites. We used these data to enable the creation of climatologies that include correlations between observed temperature, water vapor, and cloud properties from the A-Train sensors. Instead of imposing on the user an often rigid and limiting web-based analysis environment, we recognize the need for simple and well-designed services so that users can perform analysis in their own familiar computing environments. Custom on-demand services were developed to improve the accessibility of voluminous multi-sensor data. Services enabling geospatial, geographical, and multi-sensor parameter subsets of the data, as well as a custom time-averaged Level 3 service, will be presented. We will also show how a Level 3Q data reduction approach can be used to help “browse” the voluminous multi-sensor Level 2 data. An OpenSearch capability with full text + space + time search of data products will also be presented as an approach to facilitate interoperability with other data systems. We will present our experiences in improving usability as well as strategies for facilitating interoperability with other data systems.
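
    OpenSearch interoperability rests on a URL template advertised by the service; the sketch below fills such a template with term, bounding-box, and time-range values in the style of the OpenSearch Geo and Time extensions. The host and template are placeholders, not the NEWS service's actual endpoint.

        # Sketch of the client side of an OpenSearch interface: fill the
        # advertised URL template with a search term, a bounding box and a
        # time range. The host is a placeholder; parameter names follow the
        # OpenSearch Geo/Time extension conventions.
        import urllib.parse

        TEMPLATE = ("http://example.org/opensearch?q={searchTerms}"
                    "&bbox={geo:box}&start={time:start}&end={time:end}")

        def fill_template(template, values):
            url = template
            for key, val in values.items():
                url = url.replace("{" + key + "}", urllib.parse.quote(str(val)))
            return url

        print(fill_template(TEMPLATE, {
            "searchTerms": "water vapor",
            "geo:box": "-120,30,-110,40",
            "time:start": "2007-01-01",
            "time:end": "2007-12-31",
        }))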

  1. Four-Year Summary, Educational and Commercial Utilization of a Chemical Information Center, Part II.

    ERIC Educational Resources Information Center

    Schipma, Peter B., Ed.

    The major objective of the Illinois Institute of Technology Retrieval Institute (IITRI) Computer Search Center (CSC) is to educate and link industry, academia, and government institutions to chemical and other scientific information systems and sources. The CSC is in full operation providing services to users from a variety of machine-readable…

  2. Educational and Commercial Utilization of a Chemical Information Center, Four Year Summary.

    ERIC Educational Resources Information Center

    Williams, Martha E.; And Others

    The major objective of the IITRI Computer Search Center is to educate and link industry, academia, and government institutions to chemical and other scientific information systems and sources. The Center was developed to meet this objective and is in full operation providing services to users from a variety of machine-readable data bases with…

  3. Four-Year Summary, Educational and Commercial Utilization of a Chemical Information Center. Part I.

    ERIC Educational Resources Information Center

    Schipma, Peter B., Ed.

    The major objective of the Illinois Institute of Technology (IIT) Computer Search Center (CSC) is to educate and link industry, academia, and government institutions to chemical and other scientific information systems and sources. The CSC is in full operation providing services to users from a variety of machine-readable data bases with minimal…

  4. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    PubMed

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In this work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score, with related mathematical formulas, was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers, or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
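
    The semantic PageRank formulas are not given in the abstract, so the sketch below shows the classic PageRank power iteration that such scores typically extend; the link graph and damping factor are illustrative only, not the authors' semantic weighting.

        # Classic PageRank power iteration, the base that semantic PageRank
        # variants typically extend. Graph and damping are illustrative.
        def pagerank(links, damping=0.85, iters=50):
            """links: dict mapping each node to the list of nodes it links to."""
            nodes = list(links)
            n = len(nodes)
            rank = {u: 1.0 / n for u in nodes}
            for _ in range(iters):
                new = {u: (1 - damping) / n for u in nodes}
                for u, outs in links.items():
                    if outs:
                        share = damping * rank[u] / len(outs)
                        for v in outs:
                            new[v] += share
                    else:  # dangling node: spread its rank everywhere
                        for v in nodes:
                            new[v] += damping * rank[u] / n
                rank = new
            return rank

        web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
            print(page, round(score, 3))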

  5. Binary Bees Algorithm - bioinspiration from the foraging mechanism of honeybees to optimize a multiobjective multidimensional assignment problem

    NASA Astrophysics Data System (ADS)

    Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan

    2011-11-01

    The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search, for exploration and exploitation respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric) and optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors in almost all the quantitative indices. Hence, the overall scheme, and particularly the search-history-adapted global search strategy, was validated.
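
    A full multiobjective BBA is beyond a short example, but its global/local partitioning is easy to sketch on a single-objective toy: random scouts for exploration plus Hamming-neighbourhood local search around elites. The OneMax fitness and all parameters below are stand-ins for the paper's assignment problem.

        # Compact single-objective sketch of the Bees-Algorithm pattern:
        # random global scouts plus local search in a Hamming neighbourhood
        # around elite solutions. The toy OneMax fitness is a stand-in.
        import random

        N_BITS, N_SCOUTS, N_ELITES, N_LOCAL, GENERATIONS = 30, 20, 3, 10, 40

        def fitness(bits):               # toy objective: count of ones
            return sum(bits)

        def random_bits():
            return [random.randint(0, 1) for _ in range(N_BITS)]

        def hamming_neighbour(bits, radius=2):
            """Flip up to `radius` random bits (local search move)."""
            out = bits[:]
            for i in random.sample(range(N_BITS), random.randint(1, radius)):
                out[i] ^= 1
            return out

        population = [random_bits() for _ in range(N_SCOUTS)]
        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            next_pop = []
            for elite in population[:N_ELITES]:  # exploitation around elites
                neighbours = [hamming_neighbour(elite) for _ in range(N_LOCAL)]
                next_pop.append(max(neighbours + [elite], key=fitness))
            while len(next_pop) < N_SCOUTS:      # exploration: fresh scouts
                next_pop.append(random_bits())
            population = next_pop

        print(max(map(fitness, population)), "of", N_BITS)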

  6. Iterated local search algorithm for solving the orienteering problem with soft time windows.

    PubMed

    Aghezzaf, Brahim; Fahim, Hassan El

    2016-01-01

    In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The relaxation adopted in the orienteering problem with soft time windows (OPSTW) studied in this research is a late-service relaxation that allows linearly penalized late services to customers. We solve this problem heuristically using a hybrid iterated local search. The results of the computational study show that the proposed approach is able to achieve promising solutions on the OPTW test instances available in the literature; one new best solution is found. On the newly generated test instances of the OPSTW, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.
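
    The paper's instances and moves are not reproduced here, so the following skeleton only illustrates the iterated-local-search pattern with a linear lateness penalty: a toy instance, a simple add/drop local search, and a random perturbation step.

        # Skeleton of an iterated local search for an orienteering problem
        # with soft time windows: profit minus a linear lateness penalty.
        # Instance data, moves and parameters are toy stand-ins.
        import random

        random.seed(1)
        N = 8
        profit = [random.randint(5, 20) for _ in range(N)]
        travel = 4                       # constant travel time per leg (toy)
        due = [random.randint(10, 50) for _ in range(N)]
        PENALTY, TMAX = 1.0, 60

        def score(route):
            t, total = 0, 0.0
            for c in route:
                t += travel
                if t > TMAX:
                    return float("-inf")         # hard route-duration limit
                total += profit[c] - PENALTY * max(0, t - due[c])  # soft window
            return total

        def local_search(route):
            improved = True
            while improved:
                improved = False
                for c in range(N):               # try adding or dropping a stop
                    cand = route + [c] if c not in route else [x for x in route if x != c]
                    if score(cand) > score(route):
                        route, improved = cand, True
            return route

        best = local_search([])
        for _ in range(100):                     # iterate: perturb, re-optimise
            perturbed = random.sample(best, max(0, len(best) - 2))
            cand = local_search(perturbed)
            if score(cand) > score(best):
                best = cand
        print(best, round(score(best), 1))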

  7. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies, making it a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access and utilize those models with cloud facilities. Based on this concept, this article studies open cloud service strategies for the sharing of heterogeneous geographic analysis models. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.

  8. BioEve Search: A Novel Framework to Facilitate Interactive Literature Search

    PubMed Central

    Ahmed, Syed Toufeeq; Davulcu, Hasan; Tikves, Sukru; Nair, Radhika; Zhao, Zhongming

    2012-01-01

    Background. Recent advances in computational and biological methods over the last two decades have remarkably changed the scale of biomedical research, and with them began unprecedented growth in both the production of biomedical data and the amount of published literature discussing it. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort, but also pave the way to discovering hitherto unknown information implicitly conveyed in the texts. Results. We developed a novel framework (named "BioEve") that seamlessly integrates Faceted Search (Information Retrieval) with an Information Extraction module to provide an interactive search experience for researchers in the life sciences. It enables guided step-by-step search query refinement by suggesting concepts and entities (such as genes, drugs, and diseases) to quickly filter and modify the search direction, thereby facilitating an enriched paradigm where users can discover related concepts and keywords while information seeking. Conclusions. The BioEve Search framework makes it easier to enable scalable interactive search over large collections of textual articles and to discover knowledge hidden in thousands of biomedical literature articles with ease. PMID:22693501
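
    As a rough illustration of the faceted refinement the abstract describes, the toy below filters documents by extracted entity facets; all field names and records are invented.

```python
# Documents carry extracted entities (genes, drugs, diseases) as facets;
# each user "click" narrows the current result set.
docs = [
    {"id": 1, "text": "...", "gene": {"TP53"}, "disease": {"breast cancer"}},
    {"id": 2, "text": "...", "gene": {"BRCA1"}, "disease": {"breast cancer"}},
    {"id": 3, "text": "...", "gene": {"TP53"}, "disease": {"melanoma"}},
]

def refine(results, facet, value):
    """Keep only documents whose facet set contains the chosen value."""
    return [d for d in results if value in d.get(facet, set())]

hits = refine(docs, "disease", "breast cancer")   # first facet click
hits = refine(hits, "gene", "TP53")               # second refinement
print([d["id"] for d in hits])                    # [1]
```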

  9. Automating Veterans Administration libraries: II. Implementation at the Kansas City Medical Center Library.

    PubMed

    Smith, V K; Ting, S C

    1987-04-01

    In 1985, the Kansas City Veterans Administration Medical Center began implementation of the Decentralized Hospital Computer Program (DHCP). An integrated library system, a subset of that program, was started by the medical library for acquisitions and an online catalog. To test the system, staff of the Neurology Service were trained to use the online catalog and electronic mail to request interlibrary loans and literature searches. In implementing the project with the Neurology Service, the library is paving the way for many types of electronic access and interaction with the library.

  10. Forms of the Materials Shared between a Teacher and a Pupil

    ERIC Educational Resources Information Center

    Klubal, Libor; Kostolányová, Katerina

    2016-01-01

    Methods of using ICT are changing. We are moving from the original model of work on a single computer to a model of cloud services and the use of mobile touch-screen devices. The way information is searched for and delivered between a pupil and a teacher is closely related to this shift. This work identifies common and preferred procedures of…

  11. GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.

    PubMed

    Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun

    2013-01-01

    As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time-consuming. CUDA programming can make computations more efficient by using the computational power of massive computing hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs, rather than merely accelerating all comparisons and thereby expending computational resources on unnecessary ones. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and a human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection, without time constraints.
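
    For reference, the recurrence that CUDA-SW(f) accelerates is the textbook Smith-Waterman local-alignment dynamic program; a compact CPU sketch (not the authors' CUDA code) follows.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores never drop below zero
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```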

  12. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding local exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  13. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding local exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
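
    The fitness ingredients named in the two records above (VM utilization, makespan, degree of imbalance) can be sketched as follows; the toy assignment and the way the pieces would combine into a single fitness value are illustrative, and the paper's exact formula may differ.

```python
def vm_times(assign, task_len, vm_mips):
    """Completion time per VM for tasks (by index) assigned to each VM."""
    times = [0.0] * len(vm_mips)
    for task, vm in enumerate(assign):
        times[vm] += task_len[task] / vm_mips[vm]
    return times

task_len = [400, 300, 500, 200, 600]   # task lengths (million instructions)
vm_mips  = [100, 200]                  # VM capacities (MIPS)
times = vm_times([0, 1, 1, 0, 1], task_len, vm_mips)

makespan = max(times)
# Degree of imbalance: spread of completion times relative to the average
imbalance = (max(times) - min(times)) / (sum(times) / len(times))
print(times, makespan, imbalance)
```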

  14. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. This computing paradigm tries to utilize the available resources in order to provide services. Hence, maintaining high resource utilization without wasteful power consumption has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach to the virtual machine placement problem, and it proves to provide better results than the existing technique.
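
    A generic ant-colony step for this placement problem might look like the sketch below; it shows plain pheromone-biased selection and update, not the paper's "direct guidance" variant, and all parameters are illustrative.

```python
import random

def pick_pm(vm, pms, tau, eta, alpha=1.0, beta=2.0):
    """Choose a physical machine with probability ~ pheromone * heuristic."""
    weights = [(tau[vm][p] ** alpha) * (eta[vm][p] ** beta) for p in pms]
    return random.choices(pms, weights=weights, k=1)[0]

def evaporate_and_deposit(tau, solution, quality, rho=0.1):
    for vm in tau:
        for p in tau[vm]:
            tau[vm][p] *= (1.0 - rho)          # evaporation
    for vm, p in solution.items():
        tau[vm][p] += quality                  # reinforce the built placement

pms = [0, 1]
tau = {v: {p: 1.0 for p in pms} for v in range(3)}             # pheromone
eta = {v: {p: random.random() for p in pms} for v in range(3)}  # desirability
placement = {v: pick_pm(v, pms, tau, eta) for v in range(3)}
evaporate_and_deposit(tau, placement, quality=0.5)
print(placement)
```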

  15. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    PubMed Central

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

    In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement search over encrypted data as a cloud server does. Since fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-enforced data search and access authorization scheme spanning user, fog, and cloud for resource-constrained end users. Existing schemes support either index encryption with search ability or data encryption with fine-grained access control, but not both; the proposed hybrid scheme supports both abilities simultaneously. The index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, resource-constrained end devices can rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies on many fog nodes. The security and performance analysis shows that our scheme is suitable for a fog computing environment. PMID:28629131

  16. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment.

    PubMed

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-06-17

    In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement search over encrypted data as a cloud server does. Since fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-enforced data search and access authorization scheme spanning user, fog, and cloud for resource-constrained end users. Existing schemes support either index encryption with search ability or data encryption with fine-grained access control, but not both; the proposed hybrid scheme supports both abilities simultaneously. The index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, resource-constrained end devices can rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies on many fog nodes. The security and performance analysis shows that our scheme is suitable for a fog computing environment.
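
    Real CP-ABE requires a pairing-based cryptography library, but the paper's central idea, a single primitive and key pair serving both the searchable index and the data ciphertexts, can be caricatured with standard-library tools. Everything below is a toy stand-in, not CP-ABE, and all names are invented.

```python
import hashlib
import hmac
import os

MASTER = os.urandom(32)  # per-owner secret (stand-in for a CP-ABE key pair)

def index_token(keyword: str) -> bytes:
    """Deterministic keyword token a fog node can match without learning
    the keyword (a PEKS-like stand-in for the encrypted index)."""
    return hmac.new(MASTER, b"index|" + keyword.encode(), hashlib.sha256).digest()

def data_key(doc_id: str) -> bytes:
    """Per-document key derived from the same master secret."""
    return hmac.new(MASTER, b"data|" + doc_id.encode(), hashlib.sha256).digest()

def satisfies(policy: set, attributes: set) -> bool:
    """Trivial AND-policy check standing in for CP-ABE policy evaluation."""
    return policy <= attributes

tok = index_token("cardiology")          # stored alongside the encrypted index
assert tok == index_token("cardiology")  # fog node matches equal tokens
print(satisfies({"doctor", "cardiology"}, {"doctor", "cardiology", "hospital-A"}))
print(len(data_key("record-42")))        # 32-byte per-document key
```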

  17. 7 CFR 1.10 - Search services.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Search services. 1.10 Section 1.10 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.10 Search services. Search services are services of agency personnel—clerical or professional—used in trying to find the...

  18. Systematic review, critical appraisal, and analysis of the quality of economic evaluations in stroke imaging.

    PubMed

    Burton, Kirsteen R; Perlis, Nathan; Aviv, Richard I; Moody, Alan R; Kapral, Moira K; Krahn, Murray D; Laupacis, Andreas

    2014-03-01

    This study reviews the quality of economic evaluations of imaging after acute stroke and identifies areas for improvement. We performed full-text searches of electronic databases that included Medline, Econlit, the National Health Service Economic Evaluation Database, and the Tufts Cost Effectiveness Analysis Registry through July 2012. Search strategy terms included the following: stroke*; cost*; or cost-benefit analysis*; and imag*. Inclusion criteria were empirical studies published in any language that reported the results of economic evaluations of imaging interventions for patients with stroke symptoms. Study quality was assessed by a commonly used checklist (with a score range of 0% to 100%). Of 568 unique potential articles identified, 5 were included in the review. Four of 5 articles were explicit in their analysis perspectives, which included healthcare system payers, hospitals, and stroke services. Two studies reported results during a 5-year time horizon, and 3 studies reported lifetime results. All included the modified Rankin Scale score as an outcome measure. The median quality score was 84.4% (range=71.9%-93.5%). Most studies did not consider the possibility that patients could not tolerate contrast media or could incur contrast-induced nephropathy. Three studies compared perfusion computed tomography with unenhanced computed tomography but assumed that outcomes guided by the results of perfusion computed tomography were equivalent to outcomes guided by the results of magnetic resonance imaging or noncontrast computed tomography. Economic evaluations of imaging modalities after acute ischemic stroke were generally of high methodological quality. However, important radiology-specific clinical components were missing from all of these analyses.

  19. Concentrations of indoor pollutants database: User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-05-01

    This manual describes the computer-based database on indoor air pollutants. This comprehensive database helps utility personnel perform rapid searches of literature related to indoor air pollutants. Besides general information, it provides guidance for finding specific information on concentrations of indoor air pollutants. The manual includes information on installing and using the database as well as a tutorial to assist the user in becoming familiar with the procedures involved in doing bibliographic and summary section searches. The manual demonstrates how to search for information by going through a series of questions that provide search parameters such as pollutant type, year, building type, keywords (from a specific list), country, geographic region, author's last name, and title. As more and more parameters are specified, the list of references found in the data search becomes smaller and more specific to the user's needs. Appendixes list types of information that can be input into the database when making a request. The CIP database allows individual utilities to obtain information on indoor air quality based on building types and other factors in their own service territory. This information is useful for utilities with concerns about indoor air quality and the control of indoor air pollutants. The CIP database itself is distributed by the Electric Power Software Center and runs on IBM PC-compatible computers.

  1. Web-based services for drug design and discovery.

    PubMed

    Frey, Jeremy G; Bird, Colin L

    2011-09-01

    Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and, increasingly, bioinformatics, but had relatively little to say about the importance of computing, and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of computational techniques since the emergence of the World Wide Web, commonly now referred to simply as 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.

  2. EarthExplorer

    USGS Publications Warehouse

    Houska, Treva

    2012-01-01

    The EarthExplorer trifold provides basic information for on-line access to remotely-sensed data from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The EarthExplorer (http://earthexplorer.usgs.gov/) client/server interface allows users to search and download aerial photography, satellite data, elevation data, land-cover products, and digitized maps. Minimum computer system requirements and customer service contact information also are included in the brochure.

  3. Virtually There--Transforming Gifted Education through New Technologies, Trends and Practices in Learning, International Communication and Global Education

    ERIC Educational Resources Information Center

    Eriksson, Gillian

    2012-01-01

    "It is the year 2025 and I am compiling this article for an instant VPD (videopod) that is streamed over the world. An EESR (Educational Expert Service Request) came from an empathetic computer HIAS (Hi, I am Sam) that matched my qualifications with a quest by online activists SFT (Searching for Truth) to examine global interactions in…

  4. Radio Signal Augmentation for Improved Training of a Convolutional Neural Network

    DTIC Science & Technology

    2016-09-01

    official government endorsement or approval of commercial products or services referenced in this report. Bluetooth® is a registered...trademark of Bluetooth SIG, Inc. Nuand™ and bladeRF™ are trademarks of Nuand, LLC. Released by E. R. Buckland, Head IO Support to National... Bluetooth® computer mouse, and Bluetooth® search from a mobile cellular phone. Qualitatively, model M_offset dramatically outperformed model M_clean in

  5. Trident: scalable compute archives: workflows, visualization, and analysis

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and has come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor workflows and sub-workflows, (3) ImageX, an interactive image visualization service, (4) an authentication and authorization service, (5) a data service that handles archival, staging and serving of data products, and (6) a notification service that serves the statistical collation and reporting needs of various projects. Several other components are under development. Trident is an umbrella project that evolved from the One Degree Imager, Portal, Pipeline, and Archive (ODI-PPA) project, which we had initially refactored toward (1) a powerful analysis/visualization portal for Globular Cluster System (GCS) survey data collected by IU researchers, (2) a data search and download portal for the IU Electron Microscopy Center's data (EMC-SCA), and (3) a prototype archive for the Ludwig Maximilian University's Wide Field Imager. The new Trident software has been used to deploy (1) a metadata quality control and analytics portal (RADY-SCA) for DICOM-formatted medical imaging data produced by the IU Radiology Center, (2) several prototype workflows for different domains, (3) a snapshot tool within IU's Karst Desktop environment, and (4) a limited component set to serve GIS data within the IU GIS web portal. Trident SCA systems leverage supercomputing and storage resources at Indiana University but can be configured to make use of any cloud/grid resource, from local workstations/servers to (inter)national supercomputing facilities such as XSEDE.

  6. Analysis of a librarian-mediated literature search service.

    PubMed

    Friesen, Carol; Lê, Mê-Linh; Cooke, Carol; Raynard, Melissa

    2015-01-01

    Librarian-mediated literature searching is a key service provided at medical libraries. This analysis outlines ten years of data on 19,248 literature searches and describes information on the volume and frequency of search requests, time spent per search, databases used, and professional designations of the patron requestors. Combined with information on best practices for expert searching and evaluations of similar services, these findings were used to form recommendations on the improvement and standardization of a literature search service at a large health library system.

  7. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: geospatial service search finds coarse candidate services on the web, and ontology reasoning refines those coarse results. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
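
    The two-stage idea, coarse keyword search followed by ontology-based refinement, can be sketched with a toy subclass relation; the concepts and services below are invented.

```python
# Toy ontology: each service type points to its parent concept
subclass = {"WMS": "MapService", "WFS": "FeatureService",
            "MapService": "GeoService", "FeatureService": "GeoService"}

def is_a(service_type, concept):
    """Walk up the subclass chain to test subsumption."""
    t = service_type
    while t is not None:
        if t == concept:
            return True
        t = subclass.get(t)
    return False

coarse = [{"name": "AntarcticIceWMS", "type": "WMS"},        # from keyword search
          {"name": "PenguinCensusAPI", "type": "RESTService"}]
refined = [s for s in coarse if is_a(s["type"], "GeoService")]
print([s["name"] for s in refined])   # ['AntarcticIceWMS']
```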

  8. NASA spinoffs to public service

    NASA Technical Reports Server (NTRS)

    Ault, L. A.; Cleland, J. G.

    1989-01-01

    The National Aeronautics and Space Administration (NASA) Technology Utilization (TU) Division of the Office of Commercial Programs has been quite successful in directing the transfer of technology into the public sector. NASA developments of particular interest have been those in the areas of aerodynamics and aviation transport, safety, sensors, electronics and computing, and satellites and remote sensing. NASA technology has helped law enforcement, firefighting, public transportation, education, search and rescue, and practically every other sector of activity serving the U.S. public. NASA works closely with public service agencies and associations, especially those serving the local needs of citizens, to expedite the benefits of technology transfer. A number of examples demonstrate the technology transfer method and the opportunities of NASA spinoffs to public service.

  9. Retrospective indexing (RI) - A computer-aided indexing technique

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1990-01-01

    An account is given of a method for database updating designated 'computer-aided indexing' (CAI), which has been implemented very efficiently at NASA's Scientific and Technical Information Facility by means of retrospective indexing. Novel terms added to the NASA Thesaurus will therefore proceed directly into both the NASA-RECON aerospace information system and its portion of the ESA Information Retrieval Service, giving users full access to material thus indexed. If a given term appears in the title of a record, it is given special weight. An illustrative graphic representation of the CAI search strategy is presented.
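
    The title-weighting rule mentioned above is easy to sketch; the weights and sample record below are illustrative only.

```python
def term_scores(record, thesaurus, title_weight=3, body_weight=1):
    """Score thesaurus terms, counting a title hit more than a body hit."""
    scores = {}
    for term in thesaurus:
        score = 0
        if term in record["title"].lower():
            score += title_weight
        if term in record["abstract"].lower():
            score += body_weight
        if score:
            scores[term] = score
    return scores

rec = {"title": "Hypersonic inlet design", "abstract": "CFD study of inlets..."}
print(term_scores(rec, ["hypersonic", "inlet", "turbine"]))
# {'hypersonic': 3, 'inlet': 4}
```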

  10. Outline of Toshiba Business Information Center

    NASA Astrophysics Data System (ADS)

    Nagata, Yoshihiro

    The Toshiba Business Information Center gathers and stores in-house and external business information used in common within Toshiba Corp., and provides company-wide circulation, reference and other services. The Center established a centralized information management system by employing decentralized computers, electronic filing apparatus (30 cm laser discs) and other office automation equipment. Online retrieval through a LAN is available for searching the stored documents, and increasing numbers of copying requests are processed by the electronic filing system. This paper describes the purpose of establishing the Center, its facilities, its management scheme, the systematization of the files, and the present situation and plans of each information service.

  11. Automating Veterans Administration libraries: II. Implementation at the Kansas City Medical Center Library.

    PubMed Central

    Smith, V K; Ting, S C

    1987-01-01

    In 1985, the Kansas City Veterans Administration Medical Center began implementation of the Decentralized Hospital Computer Program (DHCP). An integrated library system, a subset of that program, was started by the medical library for acquisitions and an online catalog. To test the system, staff of the Neurology Service were trained to use the online catalog and electronic mail to request interlibrary loans and literature searches. In implementing the project with the Neurology Service, the library is paving the way for many types of electronic access and interaction with the library. PMID:3594023

  12. 19 CFR 162.12 - Service of search warrant.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Service of search warrant. 162.12 Section 162.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Search Warrants § 162.12 Service of search warrant. A...

  13. VITMO: A Case Study in Virtual Observatories as Data Portals and Development of Web Services as Search Tools

    NASA Astrophysics Data System (ADS)

    Smith, D.; Barnes, R. J.; Morrison, D.; Talaat, E. R.; Potter, M.; Patrone, D.; Weiss, M.; Sarris, T.

    2013-12-01

    Virtual Observatories are more than data portals that span multiple missions and data sets. They need to provide a system that is usable by a broad swath of people with different backgrounds. The great promise of Virtual Observatories is the ability to perform complex search operations on a large variety of different data sets, allowing researchers to isolate and select the relevant measurements for their topic of study. The Virtual ITM Observatory (VITMO) is unique in having many diverse datasets covering a large temporal and spatial range, which presents a unique search problem. VITMO provides many methods by which the user can search for and select data of interest, including restricting selections based on geophysical conditions (solar wind speed, Kp, etc.) as well as finding those datasets that overlap in time and/or space. We are developing a series of lightweight web services that will provide a new data search capability for VITMO and other VxOs. The services will consist of a database of spacecraft ephemerides and instrument fields of view; an overlap calculator to find times when the fields of view of different instruments intersect; and a magnetic field line tracing service that will map in situ and ground-based measurements to the equatorial plane in magnetic coordinates for a number of field models and geophysical conditions. Each service on its own provides a useful new capability for virtual observatories; operating together they will provide a powerful new search tool. The ephemerides service is being built using the Navigation and Ancillary Information Facility (NAIF) SPICE toolkit (http://naif.jpl.nasa.gov/naif/index.html), allowing it to be extended to support any Earth-orbiting satellite with the addition of the appropriate SPICE kernels or two-line element sets (TLE). An instrument kernel (IK) file will be used to describe the observational geometry of the instrument (e.g., field-of-view size, shape, and orientation). The overlap calculator uses techniques borrowed from computer graphics to identify overlapping measurements in space and time, and will allow a user-defined uncertainty to be selected so that 'near misses' can be found. The magnetic field tracing service will feature a database of pre-calculated field line tracings for ground stations but will also allow dynamic tracing of arbitrary coordinates. These services will allow the non-specialist user of VITMO to select data that they were previously unable to locate, opening up analysis opportunities beyond the instrument teams and making it much easier for future students who come into the field.
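
    The overlap calculator's core test, interval intersection widened by a user-defined tolerance so 'near misses' are caught, can be sketched in a few lines; the intervals and tolerance below are illustrative.

```python
def overlapping_pairs(a, b, tol=0.0):
    """Return pairs of (start, end) intervals from a and b that intersect,
    after widening each interval by tol to admit near misses."""
    hits = []
    for s1, e1 in a:
        for s2, e2 in b:
            if s1 - tol <= e2 and s2 - tol <= e1:  # widened intersection test
                hits.append(((s1, e1), (s2, e2)))
    return hits

inst_a = [(0, 100), (300, 400)]    # observation windows, seconds
inst_b = [(90, 150), (410, 500)]
print(overlapping_pairs(inst_a, inst_b, tol=20.0))
# true overlap (0-100 vs 90-150) plus the 10 s near miss (300-400 vs 410-500)
```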

  14. A study to enhance clinical end-user MEDLINE search skills: design and baseline findings.

    PubMed

    McKibbon, K A; Haynes, R B; Johnston, M E; Walker, C J

    1991-01-01

    To determine if a preceptor and timely, individualized feedback improve the performance of physicians searching MEDLINE with GRATEFUL MED in clinical settings. Randomized controlled trial. A 300-bed primary-to-tertiary care teaching hospital. Computers were installed in the wards and clinics of 6 major clinical services, and in the emergency room, intensive care and neonatal intensive care units. All physicians and physicians-in-training from the departments of Medicine, Family Medicine, Surgery, Psychiatry, Pediatrics, and Obstetrics and Gynecology were included if they made patient care decisions for at least 8 weeks during the study period. All participants were given a 1-hour training class and 1 hour of individualized searching with 1 of the 2 study librarians. After training, participants were randomized to a control group, which received no further intervention, or to an intervention group in which each person chose a clinical preceptor experienced in MEDLINE searching and received individualized feedback from a study librarian on their first 10 searches, indicating search quality and providing suggestions for improvement. Feedback was mailed on the first weekday after the search was done. Baseline characteristics by study group, department and level of training, study participation rates, and searching rates. 308 of 392 eligible physicians joined the study. Participation was almost 80%, with some variation by department and level of training. Excellent balance in baseline characteristics was achieved for the 2 groups, as well as for the number who did first searches. Intervention group participants searched MEDLINE more often than did controls (3.5 searches per month vs 2.5 per month for controls, P = 0.046). The recall and precision of first searches for both groups were significantly less than those of librarians. The analysis of study data will be completed by September 1991. Clinicians are willing to do self-service searching of MEDLINE in clinical settings, but their precision and recall are less than those of a trained librarian at baseline. Search-skills enhancements are needed, and the effect of feedback and preceptors is being tested. U.S. National Library of Medicine and Ontario Ministry of Health.
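
    For reference, the recall and precision measures used to score the searches are simple set ratios, assuming a gold-standard set of relevant citations is known:

```python
def recall_precision(retrieved: set, relevant: set):
    """Recall = fraction of relevant items found; precision = fraction of
    retrieved items that are relevant."""
    hits = len(retrieved & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# e.g. 40 citations retrieved, 10 of the 25 relevant ones among them
print(recall_precision(set(range(40)), set(range(30, 55))))  # (0.4, 0.25)
```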

  15. MR-Tandem: parallel X!Tandem using Hadoop MapReduce on Amazon Web Services.

    PubMed

    Pratt, Brian; Howbert, J Jeffry; Tasman, Natalie I; Nilsson, Erik J

    2012-01-01

    MR-Tandem adapts the popular X!Tandem peptide search engine to work with Hadoop MapReduce for reliable parallel execution of large searches. MR-Tandem runs on any Hadoop cluster but offers special support for Amazon Web Services for creating inexpensive on-demand Hadoop clusters, enabling search volumes that might not otherwise be feasible with the compute resources a researcher has at hand. MR-Tandem is designed to drop in wherever X!Tandem is already in use; it requires no modification to existing X!Tandem parameter files and only minimal modification to X!Tandem-based workflows. MR-Tandem is implemented as a lightly modified X!Tandem C++ executable and a Python script that drives Hadoop clusters, including Amazon Web Services (AWS) Elastic MapReduce (EMR), using the modified X!Tandem program as a Hadoop Streaming mapper and reducer. The modified X!Tandem C++ source code is Artistic licensed, supports pluggable scoring, and is available as part of the Sashimi project at http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/extern/xtandem/. The MR-Tandem Python script is Apache licensed and available as part of the Insilicos Cloud Army project at http://ica.svn.sourceforge.net/viewvc/ica/trunk/mr-tandem/. Full documentation and a Windows installer that configures MR-Tandem, Python and all necessary packages are available at this same URL. brian.pratt@insilicos.com
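
    The Hadoop Streaming contract that MR-Tandem relies on is minimal: a mapper is any program that reads lines on stdin and writes tab-separated key/value lines on stdout. The sketch below illustrates that contract with an invented spectrum-batching scheme; it is not MR-Tandem's actual mapper.

```python
import hashlib
import sys

NUM_BATCHES = 8  # illustrative partition count

def main():
    for line in sys.stdin:
        spectrum_id, payload = line.rstrip("\n").split("\t", 1)
        # Stable hash so the same spectrum maps to the same batch everywhere
        batch = int(hashlib.md5(spectrum_id.encode()).hexdigest(), 16) % NUM_BATCHES
        # Emit key<TAB>value; Hadoop groups by key before the reduce phase
        print(f"{batch}\t{spectrum_id}\t{payload}")

if __name__ == "__main__":
    main()
```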

  16. A collaborative computer auditing system under SOA-based conceptual model

    NASA Astrophysics Data System (ADS)

    Cong, Qiushi; Huang, Zuoming; Hu, Jibing

    2013-03-01

    Some of the current challenges of computer auditing are the obstacles to retrieving, converting and translating data from different database schemas. During the last few years, many data exchange standards have been under continuous development, such as the Extensible Business Reporting Language (XBRL). These XML document standards can be used for data exchange among companies, financial institutions, and audit firms. However, for many companies it is still expensive and time-consuming to translate and provide XML messages with commercial application packages, because it is complicated and laborious to search for and transform data from thousands of tables in ERP databases. How to transfer transaction documents between audit firms and their client companies in support of continuous or real-time auditing is an important topic. In this paper, a collaborative computer auditing system under an SOA-based conceptual model is proposed. By utilizing the widely used XML document standards and existing data transformation applications developed by different companies and software vendors, these applications can be wrapped as commercial web services that will be easy to implement under the forthcoming application environment: service-oriented architecture (SOA). Under SOA environments, the multi-agency mechanism will help data assurance services over the Internet mature and gain popularity. Wrapping data transformation components for heterogeneous databases or platforms will create new component markets composed of many software vendors and assurance service companies that provide data assurance services for audit firms, regulators or third parties.

  17. A new method for E-government procurement using collaborative filtering and Bayesian approach.

    PubMed

    Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong

    2013-01-01

    Nowadays, as Internet services increase faster than ever before, government systems are being reinvented as E-government services. Therefore, government procurement sectors have to face the challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, so that the computational load involved can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain, static values but are easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.

  18. A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach

    PubMed Central

    Wang, Yan

    2013-01-01

    Nowadays, as Internet services increase faster than ever before, government systems are being reinvented as E-government services. Therefore, government procurement sectors have to face the challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, so that the computational load involved can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain, static values but are easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach. PMID:24385869
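
    One common trapezoidal fuzzy number similarity (a Chen-style measure over the four defining points, with attribute values normalized to [0, 1]) fits in a single expression; the paper's exact variant may differ.

```python
def trapezoid_similarity(p, q):
    """Similarity of two trapezoidal fuzzy numbers (a, b, c, d) with values
    in [0, 1]: 1 minus the mean absolute difference of the defining points;
    1.0 means identical fuzzy values."""
    return 1.0 - sum(abs(x - y) for x, y in zip(p, q)) / 4.0

# e.g. two fuzzy ratings of a service attribute
print(trapezoid_similarity((0.2, 0.3, 0.5, 0.6), (0.3, 0.4, 0.6, 0.7)))  # ~0.9
```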

  19. Dynamic reusable workflows for ocean science

    USGS Publications Warehouse

    Signell, Richard; Fernandez, Filipe; Wilcox, Kyle

    2016-01-01

    Digital catalogs of ocean data have been available for decades, but advances in standardized services and software for catalog search and data access make it now possible to create catalog-driven workflows that automate — end-to-end — data search, analysis and visualization of data from multiple distributed sources. Further, these workflows may be shared, reused and adapted with ease. Here we describe a workflow developed within the US Integrated Ocean Observing System (IOOS) which automates the skill-assessment of water temperature forecasts from multiple ocean forecast models, allowing improved forecast products to be delivered for an open water swim event. A series of Jupyter Notebooks are used to capture and document the end-to-end workflow using a collection of Python tools that facilitate working with standardized catalog and data services. The workflow first searches a catalog of metadata using the Open Geospatial Consortium (OGC) Catalog Service for the Web (CSW), then accesses data service endpoints found in the metadata records using the OGC Sensor Observation Service (SOS) for in situ sensor data and OPeNDAP services for remotely-sensed and model data. Skill metrics are computed and time series comparisons of forecast model and observed data are displayed interactively, leveraging the capabilities of modern web browsers. The resulting workflow not only solves a challenging specific problem, but highlights the benefits of dynamic, reusable workflows in general. These workflows adapt as new data enters the data system, facilitate reproducible science, provide templates from which new scientific workflows can be developed, and encourage data providers to use standardized services. As applied to the ocean swim event, the workflow exposed problems with two of the ocean forecast products which led to improved regional forecasts once errors were corrected. While the example is specific, the approach is general, and we hope to see increased use of dynamic notebooks across the geoscience domains.
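
    A hedged sketch of the catalog-driven discovery step, assuming the owslib package and a hypothetical CSW endpoint URL:

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example.org/csw")  # hypothetical endpoint
query = PropertyIsLike("csw:AnyText", "%sea_water_temperature%")
csw.getrecords2(constraints=[query], maxrecords=20)

for rec_id, rec in csw.records.items():
    # Each metadata record carries service endpoints in its references
    print(rec.title)
    for ref in rec.references:
        if "opendap" in str(ref.get("scheme") or "").lower():
            print("  OPeNDAP endpoint:", ref["url"])
```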

  20. An Approach to Dynamic Service Management in Pervasive Computing Systems

    DTIC Science & Technology

    2005-01-01

    standard interface to them that is easily accessible by any user. This paper outlines the design of Centaurus , an infrastructure for presenting...based on Extensi- ble Markup Language (XML) for communication, giving the system a uniform and easily adaptable interface. Centaurus defines a...easy and automatic usage. This is the vision that guides our re- search on the Centaurus system. We define a SmartSpace as a dynamic environment that

  1. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The process of distributing goods needs a strategy that minimizes the total cost of operational activities. But several constraints have to be satisfied: the capacity of the vehicles and the service time windows of the customers. The Vehicle Routing Problem with Time Windows (VRPTW) is thus a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. From the computational results, these algorithms give good performance in minimizing total distance. A larger population yields better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing in dealing with big data.
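
    A hard-windowed feasibility check for a single route, of the kind such solvers must evaluate constantly, can be sketched as follows; the customer fields and Euclidean travel times are illustrative.

```python
def route_feasible(route, customers, capacity):
    """Check vehicle capacity and time windows along one route from the depot."""
    load, t = 0, 0.0
    prev = (0.0, 0.0)                          # depot location
    for cid in route:
        c = customers[cid]
        load += c["demand"]
        if load > capacity:                    # capacity violation
            return False
        t += ((c["x"] - prev[0])**2 + (c["y"] - prev[1])**2) ** 0.5  # travel
        t = max(t, c["ready"])                 # wait if arriving early
        if t > c["due"]:                       # hard time-window violation
            return False
        t += c["service"]
        prev = (c["x"], c["y"])
    return True

customers = {1: {"x": 3, "y": 4, "demand": 2, "ready": 0, "due": 10, "service": 1}}
print(route_feasible([1], customers, capacity=5))   # True
```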

  2. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  3. Grist: Grid-based Data Mining for Astronomy

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.

    2005-12-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the "hyperatlas" project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  4. Fishing Forecasts

    NASA Technical Reports Server (NTRS)

    1988-01-01

    ROFFS stands for Roffer's Ocean Fishing Forecasting Service, Inc. Roffer combines satellite and computer technology with oceanographic information from several sources to produce frequently updated charts, sometimes as often as 30 times a day, showing clues to the location of marlin, sailfish, tuna, swordfish and a variety of other species. The company also provides customized forecasts for racing boats and the shipping industry, along with seasonal forecasts that allow the marine industry to formulate fishing strategies based on foreknowledge of the arrival and departure times of different fish. The ROFFS service exemplifies the potential benefits to marine industries from satellite observations. The most notable results are reduced search time and substantial fuel savings.

  5. Towards a single seismological service infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Spinuso, A.; Trani, L.; Frobert, L.; Van Eck, T.

    2012-04-01

    In the last five years, services and data providers within the seismological community in Europe have focused their efforts on migrating access to their archives towards a Service-Oriented Architecture (SOA). This process pragmatically follows technological trends and available solutions, aiming at effectively improving all data stewardship activities. These advancements are possible thanks to the cooperation and follow-ups of several EC infrastructural projects that, by looking at general-purpose techniques, combine their developments with a multidisciplinary platform for Earth observation as the final common objective (EPOS, the European Plate Observing System). One of the first results of this effort is the Earthquake Data Portal (http://www.seismicportal.eu), which provides a collection of tools to discover, visualize and access a variety of seismological data sets such as seismic waveforms, accelerometric data, earthquake catalogs and parameters. The Portal offers a cohesive distributed search environment, linking data search and access across multiple data providers through interactive web services, map-based tools and diverse command-line clients. Our work continues under other EU FP7 projects; here we address initiatives in two of them. The NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation) project will implement a Common Services Architecture based on OGC service APIs, in order to provide resource-oriented common interfaces across the data access and processing services. This will improve interoperability between tools and across projects, enabling the development of higher-level applications that can uniformly access the data and processing services of all participants. This effort will be conducted jointly with the VERCE project (Virtual Earthquake and Seismology Research Community for Europe). VERCE aims to enable seismologists to exploit the wealth of seismic data within a data-intensive computation framework tailored to the specific needs of the community. It will provide a new interoperable infrastructure, as the computational backbone behind the publicly available interfaces. VERCE will have to face the challenges of implementing a service-oriented architecture providing an efficient layer between the data and Grid infrastructures, coupling HPC data analysis and HPC data modeling applications through the execution of workflows and data sharing mechanisms. Online registries of interoperable workflow components, storage of intermediate results, and data provenance are aspects currently under investigation to make the VERCE facilities usable by a large population of users, data providers and service providers. For these purposes, the adoption of a Digital Object Architecture to create online catalogs referencing and semantically describing these distributed resources, such as datasets, computational processes and derivative products, is seen as a viable solution to monitor and steer the usage of the infrastructure, increasing its efficiency and the cooperation among the community.

  6. New NED XML/VOtable Services and Client Interface Applications

    NASA Astrophysics Data System (ADS)

    Pevunova, O.; Good, J.; Mazzarella, J.; Berriman, G. B.; Madore, B.

    2005-12-01

    The NASA/IPAC Extragalactic Database (NED) provides data and cross-identifications for over 7 million extragalactic objects fused from thousands of survey catalogs and journal articles. The data cover all frequencies from radio through gamma rays and include positions, redshifts, photometry and spectral energy distributions (SEDs), sizes, and images. NED services have traditionally supplied data in HTML format for connections from Web browsers, and a custom ASCII data structure for connections by remote computer programs written in the C programming language. We describe new services that provide responses from NED queries in XML documents compliant with the international virtual observatory VOtable protocol. The XML/VOtable services support cone searches, all-sky searches based on object attributes (survey names, cross-IDs, redshifts, flux densities), and requests for detailed object data. Initial services have been inserted into the NVO registry, and others will follow soon. The first client application is a Style Sheet specification for rendering NED VOtable query results in Web browsers that support XML. The second prototype application is a Java applet that allows users to compare multiple SEDs. The new XML/VOtable output mode will also simplify the integration of data from NED into visualization and analysis packages, software agents, and other virtual observatory applications. We show an example SED from NED plotted using VOPlot. The NED website is: http://nedwww.ipac.caltech.edu.
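
    Fetching and parsing a cone-search VOtable response can be sketched with astropy; the endpoint URL below is hypothetical, while RA/DEC/SR are the standard cone-search parameters.

```python
import io
import requests
from astropy.io.votable import parse

resp = requests.get(
    "https://example.org/conesearch",               # hypothetical endpoint
    params={"RA": 10.68, "DEC": 41.27, "SR": 0.1},  # standard cone-search params
    timeout=30,
)
votable = parse(io.BytesIO(resp.content))           # VOtable XML document
table = votable.get_first_table().to_table()        # as an astropy Table
print(table.colnames)
```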

  7. THE ROLE OF SEARCHING SERVICES IN AN ACQUISITIONS PROGRAM.

    ERIC Educational Resources Information Center

    LUECK, ANTOINETTE L.; AND OTHERS

    A USER PRESENTS HIS POINT OF VIEW ON LITERATURE SEARCHING THROUGH THE MAJOR SEARCHING SERVICES IN THE OVERALL PROGRAM OF ACQUISITIONS FOR THE ENGINEERING STAFF OF THE AIR FORCE AERO PROPULSION LABORATORY. THESE MAJOR SEARCHING SERVICES INCLUDE THE DEFENSE DOCUMENTATION CENTER (DDC), THE NATIONAL AERONAUTICS AND SPACE ADMINISTRATION (NASA), THE…

  8. 78 FR 13624 - Proposed Information Collection; Comment Request; Age Search Service

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-28

    ... Search Service AGENCY: U.S. Census Bureau, Commerce. ACTION: Notice. SUMMARY: The Department of Commerce...- 3434; or: [email protected] . SUPPLEMENTARY INFORMATION I. Abstract Age Search is a service... inheritance. The Age Search forms are used by the public in order to provide the Census Bureau with the...

  9. Pharmacist Computer Skills and Needs Assessment Survey

    PubMed Central

    Jewesson, Peter J

    2004-01-01

    Background To use technology effectively for the advancement of patient care, pharmacists must possess a variety of computer skills. We recently introduced a novel applied informatics program in this Canadian hospital clinical service unit to enhance the informatics skills of our members. Objective This study was conducted to gain a better understanding of the baseline computer skills and needs of our hospital pharmacists immediately prior to the implementation of an applied informatics program. Methods In May 2001, an 84-question written survey was distributed by mail to 106 practicing hospital pharmacists in our multi-site, 1500-bed, acute-adult-tertiary care Canadian teaching hospital in Vancouver, British Columbia. Results Fifty-eight surveys (55% of total) were returned within the two-week study period. The survey responses reflected the opinions of licensed BSc and PharmD hospital pharmacists with a broad range of pharmacy practice experience. Most respondents had home access to personal computers, and regularly used computers in the work environment for drug distribution, information management, and communication purposes. Few respondents reported experience with handheld computers. Software use experience varied according to application. Although patient-care information software and e-mail were commonly used, experience with spreadsheet, statistical, and presentation software was negligible. The respondents were familiar with Internet search engines, and these were reported to be the most common method of seeking clinical information online. Although many respondents rated themselves as being generally computer literate and not particularly anxious about using computers, the majority believed they required more training to reach their desired level of computer literacy. Lack of familiarity with computer-related terms was prevalent. Self-reported basic computer skill was typically at a moderate level, and varied depending on the task. Specifically, respondents rated their ability to manipulate files, use software help features, and install software as low, but rated their ability to access and navigate the Internet as high. Respondents were generally aware of what online resources were available to them and Clinical Pharmacology was the most commonly employed reference. In terms of anticipated needs, most pharmacists believed they needed to upgrade their computer skills. Medical database and Internet searching skills were identified as those in greatest need of improvement. Conclusions Most pharmacists believed they needed to upgrade their computer skills. Medical database and Internet searching skills were identified as those in greatest need of improvement for the purposes of improving practice effectiveness. PMID:15111277

  10. CD-ROM source data uploaded to the operating and storage devices of an IBM 3090 mainframe through a PC terminal.

    PubMed

    Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C

    1992-07-01

    A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM's PC devices to floppy disks. These disks are then uploaded to the mainframe computer through an IBM/PC equipped with WordPerfect text editor and computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). WordPerfect (5.1) is then used to process and save the text in DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC with a network switch or dial-in connection after the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user with an on-line, very powerful and quick method of searching for words specifying: diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce with optional parts of abstracts into papers being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting from a search performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.
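
    As a rough modern analogue of the transfer step described above, the following Python sketch uses the standard-library ftplib to push a downloaded results file to a remote host. The host name, credentials, and file names are placeholders, not the original OSU ACS setup.

      # Hypothetical sketch: pushing a search-results file to a remote host
      # over FTP, mirroring the upload step in the workflow described above.
      from ftplib import FTP

      def upload_search_results(host, user, password, local_path, remote_path):
          """Upload a search-results file to a remote host via FTP."""
          with FTP(host) as ftp:
              ftp.login(user=user, passwd=password)
              with open(local_path, "rb") as fh:
                  # STOR transfers the file to the remote path in binary mode.
                  ftp.storbinary(f"STOR {remote_path}", fh)

      # Example (placeholder host and credentials):
      # upload_search_results("mainframe.example.edu", "user", "secret",
      #                       "DOWNLOAD.DOC", "MEDLINE.SEARCH.TXT")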

  11. Nursing identity and patient-centredness in scholarly health services research: a computational text analysis of PubMed abstracts 1986-2013.

    PubMed

    Bell, Erica; Campbell, Steve; Goldberg, Lynette R

    2015-01-22

    The most important and contested element of nursing identity may be the patient-centredness of nursing, though this concept is not well-treated in the nursing identity literature. More conceptually based mapping of nursing identity constructs is needed to help nurses shape their identity. The field of computational text analytics offers new opportunities to scrutinise how growing disciplines such as health services research construct nursing identity. This paper maps the conceptual content of scholarly health services research in PubMed as it relates to the patient-centredness of nursing. Computational text analytics software was used to analyse all health services abstracts in the database PubMed since 1986. Abstracts were treated as indicative of the content of health services research. The database PubMed was searched for all research papers using the term "service" or "services" in the abstract or keywords for the period 01/01/1986 to 30/06/2013. A total of 234,926 abstracts were obtained. Leximancer software was used in 1) mapping of 4,144,458 instances of 107 concepts; 2) analysis of 106 paired concept co-occurrences for the nursing concept; and 3) sentiment analysis of the nursing concept versus patient, family and community concepts, and clinical concepts. Nursing is constructed within quality assurance or service implementation or workforce development concepts. It is relatively disconnected from patient, family or community care concepts. For those who agree that patient-centredness should be a part of nursing identity in practice, this study suggests that there is a need for development of health services research into both the nature of the caring construct in nursing identity and its expression in practice. More fundamentally, the study raises questions about whether health services research cultures even value the politically popular idea of nurses as patient-centred caregivers and whether they should.

  12. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  13. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    PubMed

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  14. Reverse screening methods to search for the protein targets of chemopreventive compounds

    NASA Astrophysics Data System (ADS)

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-05-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction.
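
    As a minimal illustration of ligand-based target fishing, the Python sketch below uses the open-source RDKit toolkit to rank candidate targets by the best Tanimoto similarity between a query compound and each target's known ligands. The target names and ligand SMILES are invented placeholders, not records drawn from ChEMBL or BindingDB.

      # Toy ligand-based "target fishing": rank targets by the best Tanimoto
      # similarity between the query compound and each target's known ligands.
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      KNOWN_LIGANDS = {  # hypothetical targets with illustrative SMILES
          "target-A": ["CC(=O)Oc1ccccc1C(=O)O", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"],
          "target-B": ["c1ccncc1", "Cc1ccncc1"],
      }

      def fingerprint(smiles):
          mol = Chem.MolFromSmiles(smiles)
          return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

      def rank_targets(query_smiles):
          """Rank targets by maximum similarity to any of their known ligands."""
          query_fp = fingerprint(query_smiles)
          scores = {
              target: max(DataStructs.TanimotoSimilarity(query_fp, fingerprint(s))
                          for s in ligands)
              for target, ligands in KNOWN_LIGANDS.items()
          }
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      print(rank_targets("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin as the query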

  15. Reverse Screening Methods to Search for the Protein Targets of Chemopreventive Compounds.

    PubMed

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-01-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget, and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB, and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction.

  16. Reverse Screening Methods to Search for the Protein Targets of Chemopreventive Compounds

    PubMed Central

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-01-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget, and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB, and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction. PMID:29868550

  17. Personalization of Rule-based Web Services.

    PubMed

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services or intelligent matchmaking. This research proposes a flexible, personalized Rule-based Web Services System that addresses these problems and enables efficient search, discovery and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.
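
    A toy Python sketch of the rule-based matchmaking-and-ranking idea; the rule format, service records, and scoring below are invented for illustration and are not the authors' system.

      # Rule-based personalization sketch: score services against user
      # preference rules, then return them ranked best-first.
      SERVICES = [
          {"name": "svcA", "price": 0, "lang": "en", "rating": 4.5},
          {"name": "svcB", "price": 5, "lang": "en", "rating": 4.9},
          {"name": "svcC", "price": 0, "lang": "ko", "rating": 3.8},
      ]

      RULES = [  # each rule: (label, predicate over a service record)
          ("free", lambda s: s["price"] == 0),
          ("english", lambda s: s["lang"] == "en"),
          ("well rated", lambda s: s["rating"] >= 4.0),
      ]

      def personalized_search(services, rules):
          """Score each service by how many preference rules it satisfies."""
          scored = [(sum(rule(s) for _, rule in rules), s["name"])
                    for s in services]
          return [name for _, name in sorted(scored, reverse=True)]

      print(personalized_search(SERVICES, RULES))  # ['svcA', 'svcB', 'svcC']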

  18. Computer Information Retrieval for Journalists.

    ERIC Educational Resources Information Center

    Rodewald, Pam

    1989-01-01

    Discusses the use of computer information retrieval (on-line electronic search methods). Examines advantages and disadvantages of on-line searching versus manual searching. Offers questions to help in the decision to purchase and use on-line searching with students. (MS)

  19. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and to support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.

  20. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and to support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.

  1. Federating Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Baru, C.; Lin, K.

    2009-04-01

    The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation for such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be imbedded in any software application. The need for federated catalogs in GEON arises because, (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US (http://es-portal.geongrid.org), which are distributed across data archives at IRIS in Seattle, Washington, UNAVCO in Boulder, Colorado, and at the ICDP archives in GFZ, Potsdam, Germany. This service implements a "virtual" catalog--the actual/"physical" catalogs and data are stored at each of the remote locations. A federated search across all these catalogs would enable GEON users to discover data across all of these environments with a single search request. Our objective is to implement this search service via the OGC Catalog Services for the Web (CS-W) standard by providing appropriate CSW "wrappers" for each metadata catalog, as necessary. This paper will discuss technical issues in designing and deploying such a multi-catalog search service in GEON and describe an initial prototype of the federated search capability.
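
    The fan-out pattern behind such a federated search can be sketched in a few lines of Python: a single request is forwarded concurrently to every registered catalog and the results are merged. The endpoint URLs and the JSON response shape below are hypothetical stand-ins for the CS-W wrappers described above.

      # Federated search sketch: query all catalogs in parallel, merge results.
      import concurrent.futures
      import json
      import urllib.parse
      import urllib.request

      CATALOGS = [  # placeholder catalog endpoints
          "https://catalog-a.example.org/search",
          "https://catalog-b.example.org/search",
      ]

      def query_catalog(base_url, keyword):
          url = f"{base_url}?q={urllib.parse.quote(keyword)}"
          with urllib.request.urlopen(url, timeout=10) as resp:
              return json.load(resp).get("records", [])

      def federated_search(keyword):
          """Forward one request to every catalog and merge the records."""
          merged = []
          with concurrent.futures.ThreadPoolExecutor() as pool:
              futures = [pool.submit(query_catalog, c, keyword) for c in CATALOGS]
              for fut in concurrent.futures.as_completed(futures):
                  try:
                      merged.extend(fut.result())
                  except OSError:
                      pass  # one unreachable catalog should not fail the search
          return merged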

  2. Enhancing UCSF Chimera through web services

    PubMed Central

    Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.

    2014-01-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624

  3. BingEO: Enable Distributed Earth Observation Data for Environmental Research

    NASA Astrophysics Data System (ADS)

    Wu, H.; Yang, C.; Xu, Y.

    2010-12-01

    Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean cheap energy. To address these problems, scientists are developing various models to analyze, forecast, and simulate geospatial phenomena to support critical decision making. These models not only challenge our computing technology, but also challenge us to meet their huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences including environmental research. Based on Microsoft Bing Search Engine and Bing Map, a seamlessly integrated and visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module on the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Map to: 1) Use Bing Search to discover Web Map Services (WMS) resources available over the internet; 2) Develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) Allow users to manually register data services; 4) Provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online. Given the amount of observation data already accumulated and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently and economically in earth science applications.
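
    As a concrete illustration of consuming one such data source, the Python sketch below issues the standard OGC WMS 1.3.0 GetCapabilities request and lists the named layers a server offers. The server URL is a placeholder, not a BingEO endpoint.

      # List the named layers advertised by a WMS server (OGC WMS 1.3.0).
      import urllib.request
      import xml.etree.ElementTree as ET

      def list_wms_layers(base_url):
          url = base_url + "?SERVICE=WMS&REQUEST=GetCapabilities&VERSION=1.3.0"
          with urllib.request.urlopen(url, timeout=30) as resp:
              tree = ET.parse(resp)
          ns = {"wms": "http://www.opengis.net/wms"}
          # Each named layer advertises a <Name> child element.
          return [n.text
                  for n in tree.getroot().findall(".//wms:Layer/wms:Name", ns)]

      # Example (replace with a real WMS endpoint before running):
      # print(list_wms_layers("https://wms.example.org/wms"))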

  4. MR-Tandem: parallel X!Tandem using Hadoop MapReduce on Amazon Web Services

    PubMed Central

    Pratt, Brian; Howbert, J. Jeffry; Tasman, Natalie I.; Nilsson, Erik J.

    2012-01-01

    Summary: MR-Tandem adapts the popular X!Tandem peptide search engine to work with Hadoop MapReduce for reliable parallel execution of large searches. MR-Tandem runs on any Hadoop cluster but offers special support for Amazon Web Services for creating inexpensive on-demand Hadoop clusters, enabling search volumes that might not otherwise be feasible with the compute resources a researcher has at hand. MR-Tandem is designed to drop in wherever X!Tandem is already in use and requires no modification to existing X!Tandem parameter files, and only minimal modification to X!Tandem-based workflows. Availability and implementation: MR-Tandem is implemented as a lightly modified X!Tandem C++ executable and a Python script that drives Hadoop clusters including Amazon Web Services (AWS) Elastic Map Reduce (EMR), using the modified X!Tandem program as a Hadoop Streaming mapper and reducer. The modified X!Tandem C++ source code is Artistic licensed, supports pluggable scoring, and is available as part of the Sashimi project at http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/extern/xtandem/. The MR-Tandem Python script is Apache licensed and available as part of the Insilicos Cloud Army project at http://ica.svn.sourceforge.net/viewvc/ica/trunk/mr-tandem/. Full documentation and a Windows installer that configures MR-Tandem, Python and all necessary packages are available at this same URL. Contact: brian.pratt@insilicos.com PMID:22072385

  5. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars or pulsars has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches, covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process the data and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently runs on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
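
    The core of such a periodicity search can be shown in a short Python/NumPy sketch: Fourier transform a photon-count time series and flag the strongest spectral peaks as candidates. Real pipelines add dedispersion, harmonic summing, and interference rejection; all numbers here are invented for illustration.

      # Toy periodicity search: report the strongest Fourier peaks.
      import numpy as np

      def top_candidates(counts, dt, n_best=5):
          """Return (frequency_hz, power) for the strongest spectral peaks."""
          power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
          freqs = np.fft.rfftfreq(len(counts), d=dt)
          best = np.argsort(power)[::-1][:n_best]
          return [(freqs[i], power[i]) for i in best if freqs[i] > 0]

      rng = np.random.default_rng(0)
      t = np.arange(0, 10.0, 1e-3)                    # 10 s sampled at 1 kHz
      counts = rng.poisson(5 + 2 * np.sin(2 * np.pi * 33.0 * t))  # 33 Hz signal
      print(top_candidates(counts.astype(float), dt=1e-3))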

  6. Mercury- Distributed Metadata Management, Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, bookmarkable search results, save, retrieve, and modify search criteria.

  7. 2005 8th Annual Systems Engineering Conference. Volume 1, Tuesday

    DTIC Science & Technology

    2005-10-27

    [Fragmented briefing-slide text; the recoverable terms are NCES Discovery Services, Federated Search, Enterprise Service Management, and Security Services, plus a Global Strike Mission Planning task from NCCP Oktoberfest 2004 and a note that test, certification and accreditation should be focused on small services.]

  8. Chapter 51: How to Build a Simple Cone Search Service Using a Local Database

    NASA Astrophysics Data System (ADS)

    Kent, B. R.; Greene, G. R.

    The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be set up and configured locally using MySQL. Data will be read into a table, and the Java JDBC API will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
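
    A compact Python stand-in for the chapter's recipe (which uses MySQL and a Java servlet): store positions in a local table, select the rows within the search radius of the input (RA, Dec), and emit a minimal VOTable. The table name and columns are illustrative.

      # Minimal cone search over a local database, returning a bare VOTable.
      import math
      import sqlite3

      def ang_sep_deg(ra1, dec1, ra2, dec2):
          """Angular separation in degrees (spherical law of cosines)."""
          ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
          c = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
          return math.degrees(math.acos(max(-1.0, min(1.0, c))))

      def cone_search(db, ra, dec, sr):
          rows = db.execute("SELECT name, ra, dec FROM sources").fetchall()
          hits = [r for r in rows if ang_sep_deg(ra, dec, r[1], r[2]) <= sr]
          body = "\n".join(f"<TR><TD>{n}</TD><TD>{r}</TD><TD>{d}</TD></TR>"
                           for n, r, d in hits)
          return ("<VOTABLE><RESOURCE><TABLE><DATA><TABLEDATA>\n" + body +
                  "\n</TABLEDATA></DATA></TABLE></RESOURCE></VOTABLE>")

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE sources (name TEXT, ra REAL, dec REAL)")
      db.executemany("INSERT INTO sources VALUES (?, ?, ?)",
                     [("obj1", 180.0, 0.1), ("obj2", 181.5, 0.0)])
      print(cone_search(db, 180.0, 0.0, 1.0))  # only obj1 is within 1 degree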

  9. Does Opting into a Search Service Provide Benefits to Students? ACT Working Paper 2017-3

    ERIC Educational Resources Information Center

    Moore, Joann L.; Cruce, Ty

    2017-01-01

    Recent research suggests that the use of student search services is an effective part of a college's student marketing and recruitment strategy. What is not clear, however, is whether participating in a search service is an effective part of a student's college search strategy. To address this question, we exploit a recent change in the choice…

  10. 11 CFR 4.9 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... requester for the first two hours of search time and the first 100 pages of duplication in response to any FOIA request. (2) Free computer search time. For purposes of this paragraph, the term search time is based on the concept of a manual search. To apply this to a search conducted by a computer, the...

  11. 11 CFR 4.9 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... requester for the first two hours of search time and the first 100 pages of duplication in response to any FOIA request. (2) Free computer search time. For purposes of this paragraph, the term search time is based on the concept of a manual search. To apply this to a search conducted by a computer, the...

  12. 11 CFR 4.9 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... requester for the first two hours of search time and the first 100 pages of duplication in response to any FOIA request. (2) Free computer search time. For purposes of this paragraph, the term search time is based on the concept of a manual search. To apply this to a search conducted by a computer, the...

  13. 12 CFR 602.12 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of the employee doing the work. (2) For computer searches for records, the direct costs of computer... $15.00. Fee Amounts Table:

      Type of fee                 Amount of fee
      Manual Search and Review    Prorated salary costs
      Computer Search             Direct costs
      Photocopy                   $0.15 a page
      Other Reproduction Costs    Direct costs
      Elective...

  14. The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…

  15. Searching for New Double Stars with a Computer

    NASA Astrophysics Data System (ADS)

    Bryant, T. V.

    2015-04-01

    The advent of computers with large amounts of RAM memory and fast processors, as well as easy internet access to large online astronomical databases, has made computer searches based on astrometric data practicable for most researchers. This paper describes one such search that has uncovered hitherto unrecognized double stars.

  16. A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks

    PubMed Central

    Hammad, Karim; El Bakly, Ahmed M.

    2018-01-01

    A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760
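
    A minimal Python sketch of the memetic pattern: a genetic loop plus a local-search ("meme") refinement step, with the mutation rate adapted from population statistics. The one-dimensional objective and the adaptation rule are illustrative, not the paper's QoS-constrained routing formulation.

      # Memetic algorithm sketch: GA loop + hill-climbing refinement,
      # with a mutation rate adapted to population convergence.
      import random

      def fitness(x):                  # toy objective: maximize -(x - 3)^2
          return -(x - 3.0) ** 2

      def local_search(x, step=0.05):
          """Hill-climb around x: the 'memetic' refinement step."""
          for _ in range(10):
              for cand in (x - step, x + step):
                  if fitness(cand) > fitness(x):
                      x = cand
          return x

      def memetic(pop_size=20, gens=50):
          pop = [random.uniform(-10, 10) for _ in range(pop_size)]
          for _ in range(gens):
              scores = [fitness(x) for x in pop]
              spread = (max(scores) - min(scores)) or 1.0
              # mutate more when the population has converged (low spread)
              mean = sum(scores) / len(scores)
              mut_rate = 0.5 * (1.0 - (max(scores) - mean) / spread)
              parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
              children = []
              for _ in range(pop_size):
                  a, b = random.sample(parents, 2)
                  child = (a + b) / 2.0                 # crossover
                  if random.random() < mut_rate:        # adaptive mutation
                      child += random.gauss(0.0, 1.0)
                  children.append(local_search(child))  # meme step
              pop = children
          return max(pop, key=fitness)

      print(memetic())  # should land near 3.0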

  17. A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks.

    PubMed

    Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M

    2018-01-01

    A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem-subject to various Quality-of-Service (QoS) constraints-represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms.

  18. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization are achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full-text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group defined and shared metadata for data repository files, and (e) user, group, repository, and web 2.0-based global positioning with additional service capabilities are currently available. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best-practice implementations is presented.

  19. Proceedings of Colloquium 110 of the International Astronomical Union on Library and Information Services in Astronomy

    NASA Astrophysics Data System (ADS)

    Wilkins, George A.; Stevens-Rayburn, Sarah

    This report provides an overview of the presentations and summaries of discussions at IAU Colloquium 110, which was held in Washington, D.C., on 26-30 July 1988 and at the Goddard Space Flight Center on 1 August 1988. The topics included: the publication and acquisition of books and journals; searching for astronomical information; the handling and use of special-format materials; conservation; archiving of unpublished documents; uses of computers in libraries; astronomical databases and various aspects of the administration of astronomy libraries and services. Particular attention was paid to new developments, but the problems of astronomers and institutions in developing countries were also considered.

  20. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress the fact that the quality of solution of the exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic - in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of the time needed for experiments but also quickly validate theory against experimental results.
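
    The exhaustive method reduces to brute-force grid evaluation, as in the Python sketch below; the objective and bounds are illustrative. The cost grows as steps**dimensions, which is exactly why the growing computing power discussed above makes the approach practicable.

      # Multi-dimensional exhaustive (grid) search for a nonlinear objective.
      import itertools

      def exhaustive_search(f, bounds, steps=100):
          """Minimize f over a box by evaluating every grid point.

          bounds: list of (lo, hi) per dimension; steps: points per axis.
          """
          axes = [[lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
                  for lo, hi in bounds]
          best_x, best_v = None, float("inf")
          for x in itertools.product(*axes):
              v = f(x)
              if v < best_v:
                  best_x, best_v = x, v
          return best_x, best_v

      rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
      print(exhaustive_search(rosenbrock, [(-2, 2), (-1, 3)], steps=201))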

  1. The impact of user fees on access to health services in low- and middle-income countries.

    PubMed

    Lagarde, Mylene; Palmer, Natasha

    2011-04-13

    Following an international push for financing reforms, many low- and middle-income countries introduced user fees to raise additional revenue for health systems. User fees are charges levied at the point of use and are supposed to help reduce 'frivolous' consumption of health services, increase quality of services available and, as a result, increase utilisation of services. To assess the effectiveness of introducing, removing or changing user fees to improve access to care in low- and middle-income countries, we searched 25 international databases, including the Cochrane Effective Practice and Organisation of Care (EPOC) Group's Trials Register, CENTRAL, MEDLINE and EMBASE. We also searched the websites and online resources of international agencies, organisations and universities to find relevant grey literature. We conducted the original searches between November 2005 and April 2006 and the updated search in CENTRAL (DVD-ROM 2011, Issue 1); MEDLINE In-Process & Other Non-Indexed Citations, Ovid (January 25, 2011); MEDLINE, Ovid (1948 to January Week 2 2011); EMBASE, Ovid (1980 to 2011 Week 03) and EconLit, CSA Illumina (1969 - present) on the 26th of January 2011. We included randomised controlled trials, interrupted time-series studies and controlled before-and-after studies that reported an objective measure of at least one of the following outcomes: healthcare utilisation, health expenditures, or health outcomes. We re-analysed studies with longitudinal data. We computed price elasticities of demand for health services in controlled before-and-after studies as a standardised measure. Due to the diversity of contexts and outcome measures, we did not perform meta-analysis. Instead, we undertook a narrative summary of evidence. We included 16 studies out of the 243 identified. Most of the included studies showed methodological weaknesses that hamper the strength and reliability of their findings. When fees were introduced or increased, we found the use of health services decreased significantly in most studies. Two studies found increases in health service use when quality improvements were introduced at the same time as user fees. However, these studies have a high risk of bias. We found no evidence of effects on health outcomes or health expenditure. The review suggests that reducing or removing user fees increases the utilisation of certain healthcare services. However, emerging evidence suggests that such a change may have unintended consequences on utilisation of preventive services and service quality. The review also found that introducing or increasing fees can have a negative impact on health services utilisation, although some evidence suggests that when implemented with quality improvements these interventions could be beneficial. Most of the included studies suffered from important methodological weaknesses. More rigorous research is needed to inform debates on the desirability and effects of user fees.
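
    The standardized measure computed in the review, a price elasticity of demand, is simple arithmetic; below is a minimal Python sketch using the midpoint (arc) formula, with invented utilisation figures purely for illustration.

      # Arc elasticity: percent change in quantity over percent change in price.
      def arc_elasticity(q0, q1, p0, p1):
          dq = (q1 - q0) / ((q1 + q0) / 2.0)
          dp = (p1 - p0) / ((p1 + p0) / 2.0)
          return dq / dp

      # e.g. visits fall from 1000 to 800 per month as a fee rises from 1 to 2
      print(arc_elasticity(1000, 800, 1.0, 2.0))  # ~ -0.33 (inelastic demand)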

  2. Cloud Computing for Protein-Ligand Binding Site Comparison

    PubMed Central

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824
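
    The MapReduce pattern Cloud-PLBS exploits can be reduced to plain Python: a map step scores each binding-site pair independently (the embarrassingly parallel part), and a reduce step groups the scores by query site. The similarity function below is a toy placeholder, not SMAP.

      # MapReduce-style pairwise binding-site comparison sketch.
      from collections import defaultdict
      from multiprocessing import Pool

      def compare_pair(pair):
          """Map step: score one (query_site, target_site) pair."""
          query, target = pair
          # toy residue-overlap score, standing in for a structural comparison
          score = len(set(query[1].split(",")) & set(target[1].split(",")))
          return query[0], (target[0], score)

      def reduce_scores(mapped):
          """Reduce step: collect the scored targets for each query site."""
          grouped = defaultdict(list)
          for key, value in mapped:
              grouped[key].append(value)
          return dict(grouped)

      if __name__ == "__main__":
          sites = [("siteA", "HIS,ASP,SER"), ("siteB", "HIS,GLU"),
                   ("siteC", "ASP,SER")]
          pairs = [(q, t) for q in sites for t in sites if q[0] != t[0]]
          with Pool() as pool:
              print(reduce_scores(pool.map(compare_pair, pairs)))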

  3. Cloud computing for protein-ligand binding site comparison.

    PubMed

    Hung, Che-Lun; Hua, Guan-Jie

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery.

  4. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  5. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  6. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  7. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  8. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  9. Optimized blind gamma-ray pulsar searches at fixed computing budget

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.
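
    The multistage strategy in miniature, as a Python sketch: a cheap "semicoherent" statistic scans the whole frequency grid, and only the most promising candidates are refined with a costlier coherent statistic. Both statistics here are stand-ins chosen for brevity, not the paper's detection statistics.

      # Two-stage (semicoherent -> coherent) frequency search sketch.
      import numpy as np

      def semicoherent_score(times, freq):
          """Cheap stage-1 statistic: contrast of a folded phase histogram."""
          hist, _ = np.histogram((times * freq) % 1.0, bins=8)
          return hist.max() - hist.min()

      def coherent_score(times, freq):
          """Costlier stage-2 statistic (Rayleigh-test-like power)."""
          ph = 2 * np.pi * ((times * freq) % 1.0)
          return (np.cos(ph).sum() ** 2 + np.sin(ph).sum() ** 2) / len(times)

      def two_stage_search(times, freqs, n_refine=10):
          coarse = np.array([semicoherent_score(times, f) for f in freqs])
          best = freqs[np.argsort(coarse)[::-1][:n_refine]]  # keep candidates
          return max(best, key=lambda f: coherent_score(times, f))

      rng = np.random.default_rng(1)
      true_f = 2.5                                   # Hz, toy pulsar
      arrivals = np.sort(rng.uniform(0, 100, 5000))
      keep = rng.random(arrivals.size) < 0.3 + 0.3 * np.cos(
          2 * np.pi * true_f * arrivals)
      grid = np.linspace(0.5, 5.0, 4501)
      print(two_stage_search(arrivals[keep], grid))  # should be near 2.5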

  10. Self Reported Behavioral Health Habits and Other Health Issues Influencing Capabilities and Mission Readiness of Combat Search and Rescue Personnel

    DTIC Science & Technology

    2017-02-23

    percentages for increased poor health habits, healthcare utilization, and medication usage were computed using the group n as the denominator instead of the n... survey to assess for general areas of health-related behaviors (i.e., sleep and exercise; alcohol, tobacco, and caffeine use; common reasons for seeking...medical care and mental health support services; and reasons for increased prescription and over-the-counter medication usage ) relevant to

  11. Resource Management In Peer-To-Peer Networks: A Nadse Approach

    NASA Astrophysics Data System (ADS)

    Patel, R. B.; Garg, Vishal

    2011-12-01

    This article presents a common solution to Peer-to-Peer (P2P) network problems and distributed computing with the help of the "Neighbor Assisted Distributed and Scalable Environment" (NADSE). NADSE supports both device and code mobility. In this article we focus mainly on the NADSE-based resource management technique and on how information dissemination and searching are sped up by using the NADSE service provider node in a large network. Results show that the performance of the NADSE network is better than that of Gnutella and Freenet.

  12. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
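
    The initialization-and-restart structure can be shown in a few lines of Python; the objective below is a toy stand-in for an MPE score, not a Bayesian-network computation, and the noise and restart settings are illustrative.

      # Stochastic greedy search with random initialization and restarts.
      import random

      def score(bits):
          """Toy objective over binary vectors (stand-in for an MPE score)."""
          return sum(bits) - 3 * (bits[0] ^ bits[-1])

      def stochastic_greedy_search(n=20, steps=200, restarts=10, p_noise=0.2):
          best, best_val = None, float("-inf")
          for _ in range(restarts):                         # restart loop
              x = [random.randint(0, 1) for _ in range(n)]  # initialization
              for _ in range(steps):
                  i = random.randrange(n)
                  old = score(x)
                  x[i] ^= 1                                 # propose a flip
                  # greedy: undo worsening flips, except with prob. p_noise
                  if score(x) < old and random.random() >= p_noise:
                      x[i] ^= 1
              if score(x) > best_val:
                  best, best_val = list(x), score(x)
          return best, best_val

      print(stochastic_greedy_search())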

  13. Prediction of transits of Solar system objects in Kepler/K2 images: an extension of the Virtual Observatory service SkyBoT

    NASA Astrophysics Data System (ADS)

    Berthier, J.; Carry, B.; Vachier, F.; Eggl, S.; Santerne, A.

    2016-05-01

    All the fields of the extended space mission Kepler/K2 are located within the ecliptic. Many Solar system objects thus cross the K2 stellar masks on a regular basis. We aim to provide the entire community with a simple tool for searching and identifying Solar system objects serendipitously observed by Kepler. The sky body tracker (SkyBoT) service hosted at Institut de mécanique céleste et de calcul des éphémérides provides a Virtual Observatory compliant cone search that lists all Solar system objects present within a field of view at a given epoch. To generate such a list in a timely manner, ephemerides are pre-computed, updated weekly, and stored in a relational database to ensure fast access. The SkyBoT web service can now be used with Kepler. Solar system objects within a small (few arcminutes) field of view are identified and listed in less than 10 s. Generating object data for the entire K2 field of view (14°) takes about a minute. This extension of the SkyBoT service opens new possibilities with respect to mining K2 data for Solar system science, as well as removing Solar system objects from stellar photometric time series.
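
    A hedged sketch of calling a cone-search service like SkyBoT over HTTP. The endpoint path and parameter names below are assumptions made for illustration; consult the SkyBoT documentation for the authoritative interface.

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint and argument names -- check the service docs.
SKYBOT_URL = "http://vo.imcce.fr/webservices/skybot/skybotconesearch_query.php"

def skybot_cone_search(ra_deg, dec_deg, radius_deg, epoch_jd):
    """Ask for Solar system objects inside a cone at a given epoch."""
    params = {
        "-ra": ra_deg,       # cone centre, degrees (assumed name)
        "-dec": dec_deg,
        "-rd": radius_deg,   # search radius (assumed name)
        "-ep": epoch_jd,     # epoch as Julian day (assumed name)
        "-mime": "text",     # ask for a plain-text table (assumed name)
    }
    url = SKYBOT_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8")
```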

  14. MEDLINE SDI services: how do they compare?*

    PubMed Central

    Shultz, Mary; De Groote, Sandra L.

    2003-01-01

    Introduction: Selective dissemination of information (SDI) services regularly alert users to new information on their chosen topics. This type of service can increase a user's ability to keep current and may have a positive impact on efficiency and productivity. Currently, there are many venues available where users can establish, store, and automatically run MEDLINE searches. Purpose: To describe, evaluate, and compare SDI services for MEDLINE. Resources: The following SDI services were selected for this study: PubMed Cubby, BioMail, JADE, PubCrawler, OVID, and ScienceDirect. Methodology: Identical searches were established in four of the six selected SDI services and were run on a weekly basis over a period of two months. Eight search strategies were used in each system to test performance under various search conditions. The PubMed Cubby system was used as the baseline against which the other systems were compared. Other aspects were evaluated in all six services and include ease of use, frequency of results, ability to use MeSH, ability to access and edit existing search strategies, and ability to download to a bibliographic management program. Results: Not all MEDLINE SDI services retrieve identical results, even when identical search strategies are used. This study also showed that the services vary in terms of features and functions offered. PMID:14566377

  15. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    ERIC Educational Resources Information Center

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  16. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with that of Gradient Tabu Search and of the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has been applied to computational chemistry problems of finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
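
    For orientation, a generic gravitational search sketch: agents are assigned fitness-derived "masses" and attract one another, so the swarm drifts toward good regions. This is the plain metaheuristic, minus the analytical-gradient refinement that distinguishes GGS; all parameter choices are illustrative.

```python
import numpy as np

def gravitational_search(f, bounds, n_agents=20, iters=200, g0=100.0, seed=0):
    """Minimize f over a box. bounds = (lo_array, hi_array)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_agents, lo.size))
    v = np.zeros_like(x)
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        worst, best = fit.max(), fit.min()
        m = (worst - fit + 1e-12) / (worst - best + 1e-12)  # better => heavier
        m /= m.sum()
        g = g0 * np.exp(-20.0 * t / iters)                  # decaying constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]                                 # pull toward others
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = (rng.random(n_agents)[:, None] * g
                      * (m[:, None] * diff) / dist[:, None]).sum(axis=0)
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    best_i = np.argmin(np.apply_along_axis(f, 1, x))
    return x[best_i], f(x[best_i])

# e.g. gravitational_search(lambda v: (v**2).sum(),
#                           (np.full(3, -5.0), np.full(3, 5.0)))
```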

  17. Cyberinfrastructure at IRIS: Challenges and Solutions Providing Integrated Data Access to EarthScope and Other Earth Science Data

    NASA Astrophysics Data System (ADS)

    Ahern, T. K.; Barga, R.; Casey, R.; Kamb, L.; Parastatidis, S.; Stromme, S.; Weertman, B. T.

    2008-12-01

    While mature methods of accessing seismic data from the IRIS DMC have existed for decades, the demands for improved interdisciplinary data integration call for new approaches. Talented software teams at the IRIS DMC, UNAVCO, and the ICDP in Germany have been developing web services for all EarthScope data, including data from USArray, PBO, and SAFOD. These web services are based upon SOAP and WSDL. The EarthScope Data Portal was the first external system to access data holdings from the IRIS DMC using web services. EarthScope will also draw more heavily upon products to aid in cross-disciplinary data reuse. A Product Management System called SPADE allows archiving of and access to heterogeneous data products, presented as XML documents, at the IRIS DMC. Searchable metadata are extracted from the XML and enable powerful searches for products from EarthScope and other data sources. IRIS is teaming with the External Research Group at Microsoft Research to leverage a powerful scientific workflow engine (Trident) and interact with the web services developed at centers such as IRIS to enable access to data services as well as computational services. We believe that this approach will allow web-based control of workflows and the invocation of computational services that transform data. This capability will greatly improve access to data across scientific disciplines. This presentation will review some of the traditional access tools as well as many of the newer approaches that use web services and scientific workflows to improve interdisciplinary data access.

  18. Query Language for Location-Based Services: A Model Checking Approach

    NASA Astrophysics Data System (ADS)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, location query processing can be regarded as a model checking problem, and location queries are thus defined as hybrid logic-based formulas. Our approach differs from existing research in that it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
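
    To make the data model concrete: a sketch of a tree-structured symbolic location model and the simplest kind of containment query, which is what a "somewhere inside this region" modal query reduces to. This illustrates the underlying idea only, not the paper's hybrid-logic machinery.

```python
class Place:
    """Node in a tree-structured symbolic location model (rooms inside
    floors inside buildings, and so on)."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def descendants(self):
        for c in self.children:
            yield c
            yield from c.descendants()

def located_in(entity, region):
    """A 'somewhere inside region' query: conceptually a diamond modality,
    evaluated here by checking reachability in the subtree."""
    return entity is region or any(entity is d for d in region.descendants())

# Toy model: building > floor1 > room101.
building = Place("building")
floor1 = Place("floor1", building)
room101 = Place("room101", floor1)
print(located_in(room101, building))   # True
```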

  19. 17 CFR Appendix B to Part 145 - Schedule of Fees

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... professional personnel in searching or reviewing records. (3) When searches require the expertise of a computer... shared access network servers, the computer processing time is included in the search time for the staff... equivalent of two hours of professional search time. (d) Aggregation of requests. For purposes of determining...

  20. An MPI+X implementation of contact global search using Kokkos

    DOE PAGES

    Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...

    2015-10-05

    This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees, and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M-face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
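
    A serial toy analogue of the global search's key steps (per-face bounding boxes, spatial hashing, candidate-pair detection). The MPI distribution and Kokkos thread-level parallelism of the real implementation are deliberately omitted; names and the grid-hash scheme are illustrative.

```python
import numpy as np
from collections import defaultdict

def candidate_contact_pairs(face_verts, cell=1.0):
    """face_verts: (n_faces, n_verts_per_face, 3) array of coordinates.
    Returns pairs of faces whose axis-aligned bounding boxes overlap."""
    lo = face_verts.min(axis=1)              # (n_faces, 3) box minima
    hi = face_verts.max(axis=1)              # (n_faces, 3) box maxima
    grid = defaultdict(list)                 # uniform-grid spatial hash
    for i in range(len(face_verts)):
        cmin = np.floor(lo[i] / cell).astype(int)
        cmax = np.floor(hi[i] / cell).astype(int)
        for cx in range(cmin[0], cmax[0] + 1):
            for cy in range(cmin[1], cmax[1] + 1):
                for cz in range(cmin[2], cmax[2] + 1):
                    grid[(cx, cy, cz)].append(i)
    pairs = set()
    for members in grid.values():            # boxes sharing a cell
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                i, j = members[a], members[b]
                if (hi[i] >= lo[j]).all() and (hi[j] >= lo[i]).all():
                    pairs.add((min(i, j), max(i, j)))
    return pairs
```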

  1. Description of individual data items and codes in CRIB

    USGS Publications Warehouse

    Keefer, Eleanor K.; Calkins, James Alfred

    1978-01-01

    The U.S. Geological Survey's Computerized Resources Information Bank (CRIB) is being made available for public use through the computer facilities of the University of Oklahoma and the General Electric Company, U.S.A. The use of General Electric's worldwide information-services network provides access to the CRIB file to a worldwide clientele. This manual, which consists of two chapters, is intended as a guide to users who wish to interrogate the file. Chapter A contains a description of the CRIB file, information on the use of the GIPSY retrieval system, and a description of the General Electric MARK III Service. Chapter B contains a description of the individual data items in the CRIB record as well as code lists. CRIB consists of a set of variable-length records on the metallic and nonmetallic mineral resources of the United States and other countries. At present, 31,645 records in the master file are being made available. The record contains information on mineral deposits and mineral commodities. Some topics covered are: deposit name, location, commodity information, description of deposit, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which maintains the data file and builds, updates, searches, and prints the records using simple yet versatile command statements. Searching and selecting records is accomplished by specifying the presence, absence, or content of any element of information in the record; these specifications can be logically linked to prepare sophisticated search strategies. Output is available in the form of the complete record, a listing of selected parts of the record, or fixed-field tabulations. The General Electric MARK III Service is a computerized information services network operating internationally by land lines, satellites, and undersea cables. The service is available by local telephone to 500 cities in North America, Western Europe, Australia, Southeast Asia, Japan, and Saudi Arabia. An interface called the 'foreground driver' is used to link the GIPSY program to the General Electric system.
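
    The record-selection model described above (conditions on the presence, absence, or content of any data item, logically linked) maps naturally onto composable predicates. A small sketch with hypothetical field names; GIPSY's actual command syntax is not reproduced here.

```python
def present(field):
    """Condition: the data item exists and is non-empty."""
    return lambda rec: field in rec and rec[field] not in (None, "")

def contains(field, text):
    """Condition: the data item's content mentions the text."""
    return lambda rec: text.lower() in str(rec.get(field, "")).lower()

def all_of(*preds):
    """Logically link conditions (AND)."""
    return lambda rec: all(p(rec) for p in preds)

def search(records, predicate):
    """Keep the records satisfying the linked conditions."""
    return [r for r in records if predicate(r)]

# e.g. copper deposits in Arizona with reported production
# (field names are hypothetical):
query = all_of(contains("commodity", "copper"),
               contains("location", "Arizona"),
               present("production"))
```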

  2. The Impact of Online Bibliographic Databases on Teaching and Research in Political Science.

    ERIC Educational Resources Information Center

    Reichel, Mary

    The availability of online bibliographic databases greatly facilitates literature searching in political science. The advantages to searching databases online include combination of concepts, comprehensiveness, multiple database searching, free-text searching, currency, current awareness services, document delivery service, and convenience.…

  3. The Pricing of Information--A Search-Based Approach to Pricing an Online Search Service.

    ERIC Educational Resources Information Center

    Boyle, Harry F.

    1982-01-01

    Describes innovative pricing structure consisting of low connect time fee, print fees, and search fees, offered by Chemical Abstracts Service (CAS) ONLINE--an online searching system used to locate chemical substances. Pricing options considered by CAS, the search-based pricing approach, and users' reactions to pricing structures are noted. (EJS)

  4. Linking the EarthScope Data Virtual Catalog to the GEON Portal

    NASA Astrophysics Data System (ADS)

    Lin, K.; Memon, A.; Baru, C.

    2008-12-01

    The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from the USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station-specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web Service can be integrated into the GEON search application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.
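
    A sketch of the fan-out pattern such a virtual catalog implies: the broker forwards one query to several catalog services in parallel and merges whatever each returns. Service names and return types are placeholders, not the actual IRIS/UNAVCO/GFZ interfaces.

```python
from concurrent.futures import ThreadPoolExecutor

def federated_search(query, services):
    """`services` maps a catalog name to a callable that takes the query
    and returns a list of matching records."""
    results = {}
    with ThreadPoolExecutor(max_workers=max(len(services), 1)) as pool:
        futures = {name: pool.submit(fn, query)
                   for name, fn in services.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=30)
            except Exception:
                results[name] = []   # one slow or broken catalog must not
                                     # sink the whole federated search
    return results
```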

  5. Computer Use of a Medical Dictionary to Select Search Words.

    ERIC Educational Resources Information Center

    O'Connor, John

    1986-01-01

    Explains an experiment in text-searching retrieval for cancer questions which developed and used computer procedures (via human simulation) to select search words from medical dictionaries. This study is based on an earlier one in which search words were humanly selected, and the recall results of the two studies are compared. (Author/LRW)

  6. Capabilities in Context: Evaluating the Net-Centric Enterprise

    DTIC Science & Technology

    2009-03-01

    with an intuitive keyword search using the enterprise’s federated search capability. Service accessibility. Testers will ensure that local service has...search using the enterprise’s federated search capability. Data accessibility. Testers will ensure that Federated Search results provide active link...user may request access to the data, and be available within ‘‘2 clicks’’ from the active link provided by Federated Search. Data understandability

  7. Enhancing UCSF Chimera through web services.

    PubMed

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
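
    The abstract mentions an example Python program for accessing Opal-based services; in that spirit, here is a hedged sketch of the generic submit/poll/retrieve pattern that job-oriented web services follow. The "/submit", "/status", and "/output" paths and the status strings are hypothetical placeholders, not the real Opal API.

```python
import time
import urllib.request

def run_remote_job(base_url, payload, poll_every=5.0):
    """Submit a job, poll until it finishes, fetch its output.
    payload must be bytes; all paths below are hypothetical."""
    req = urllib.request.Request(base_url + "/submit", data=payload,
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        job_id = resp.read().decode().strip()
    while True:                                # poll until the job finishes
        with urllib.request.urlopen(f"{base_url}/status/{job_id}") as resp:
            status = resp.read().decode().strip()
        if status in ("DONE", "FAILED"):       # assumed status vocabulary
            break
        time.sleep(poll_every)
    with urllib.request.urlopen(f"{base_url}/output/{job_id}") as resp:
        return status, resp.read()
```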

  8. Gaia Data Release 1. The archive visualisation service

    NASA Astrophysics Data System (ADS)

    Moitinho, A.; Krone-Martins, A.; Savietto, H.; Barros, M.; Barata, C.; Falcão, A. J.; Fernandes, T.; Alves, J.; Silva, A. F.; Gomes, M.; Bakker, J.; Brown, A. G. A.; González-Núñez, J.; Gracia-Abril, G.; Gutiérrez-Sánchez, R.; Hernández, J.; Jordan, S.; Luri, X.; Merin, B.; Mignard, F.; Mora, A.; Navarro, V.; O'Mullane, W.; Sagristà Sellés, T.; Salgado, J.; Segovia, J. C.; Utrilla, E.; Arenou, F.; de Bruijne, J. H. J.; Jansen, F.; McCaughrean, M.; O'Flaherty, K. S.; Taylor, M. B.; Vallenari, A.

    2017-09-01

    Context. The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. These visual queries are ready for use in the Gaia Archive Search/data retrieval service. In addition, regions around user-selected objects can be further examined with automatically generated SIMBAD searches. Integration of the Aladin Lite and JS9 applications adds support for the visualisation of HiPS and FITS maps. The production of the all-sky source density map that became the iconic image of Gaia DR1 is described in detail. Conclusions: On the day of DR1, over seven thousand users accessed the Gaia Archive visualisation portal. The system, running on a single machine, proved robust and did not fail while enabling thousands of users to visualise and explore the over one billion sources in DR1. There are still several limitations, most noticeably that users may only choose from a list of pre-computed visualisations. Thus, other visualisation applications that can complement the archive service are examined. Finally, development plans for Data Release 2 are presented.
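
    A sketch of the level-of-detail idea described in the Methods: density maps are pre-computed at several resolutions by aggregation, so the server can answer a viewport request with the coarsest level that suffices. Bin counts and coordinate ranges here are illustrative, not the service's actual scheme.

```python
import numpy as np

def precompute_density_levels(ra, dec, max_level=4, base_bins=64):
    """Build a pyramid of 2D density maps over the sky; each level doubles
    the resolution of the previous one. ra/dec are arrays in degrees."""
    levels = {}
    for level in range(max_level + 1):
        bins = base_bins * 2**level
        hist, _, _ = np.histogram2d(
            ra, dec, bins=bins, range=[[0, 360], [-90, 90]])
        levels[level] = hist                 # serve the coarsest level that
    return levels                            # matches the client's viewport
```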

  9. Implementing the Army NetCentric Data Strategy in a Service-Oriented Environment

    DTIC Science & Technology

    2009-04-23

    a Data Subscription... Data Access... Federated Search, Data Search, Data Abstraction, Adapter Configuration, Adapter Data Service... across the enterprise. Patterns: Search, Status, Receive. Services: Federated Search, Artifact Discovery, Data Discovery

  10. Selecting the Administrative Computing Executive.

    ERIC Educational Resources Information Center

    Bielec, John A.

    1985-01-01

    Important steps in the computing administrator selection process are outlined, including: reviewing the administrative computing organization, determining a search methodology, selecting a search or screening committee, narrowing the candidate pool, scheduling interviews and evaluating candidates, and conducting negotiations. (MSE)

  11. Introducing Online Bibliographic Service to its Users: The Online Presentation

    ERIC Educational Resources Information Center

    Crane, Nancy B.; Pilachowski, David M.

    1978-01-01

    A description of techniques for introducing online services to new user groups includes discussion of terms and their definitions, evolution of online searching, advantages and disadvantages of online searching, production of the data bases, search strategies, Boolean logic, costs and charges, "do's and don'ts," and a user search questionnaire. (J…

  12. 75 FR 12174 - Proposed Information Collection; Comment Request; AGE Search Service

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-15

    ... DEPARTMENT OF COMMERCE Census Bureau Proposed Information Collection; Comment Request; AGE Search...: I. Abstract Age Search is a service provided by the U.S. Census Bureau for persons who need official... family relationship for rights of inheritance. The Age Search forms are used by the public in order to...

  13. Use of the Internet by burns patients, their families and friends.

    PubMed

    Rea, S; Lim, J; Falder, S; Wood, F

    2008-05-01

    The Internet has become an increasingly important source of health-related information. However, with this exponential growth comes the problem that although the volume of information is huge, its quality, accuracy, and completeness are questionable, and not only in the field of medicine. Previous studies of single medical conditions have suggested that web-based health information has limitations. The aim of this study was to evaluate Internet usage among burned patients and the people accompanying them to the outpatient clinic. A customised questionnaire was created and distributed to all patients and accompanying persons in the adult and paediatric burns clinics. This investigated computer usage, Internet access, usefulness of Internet searches, and topics searched. Two hundred and ten people completed the questionnaire, a response rate of 83%. Sixty-three percent of responders were patients, parents 21.9%, spouses 3.3%, and siblings, children, and friends the remaining 10.8%. Seventy-seven percent of attendees had been injured within the last year, 11% between 1 and 5 years previously, and 12% more than 5 years previously. Seventy-four percent had computer and Internet access. Twelve percent had performed a search. Topics searched included skin grafts, scarring, and scar management treatments such as pressure garments, silicone gel, and massage. This study has shown that computer and Internet access is high; however, only a very small number actually used the Internet to access further medical information. Patients with longer-standing injuries were more likely to access the Internet. Parents of burned children were more frequent Internet users. As more burn units develop their own web sites with information for patients and healthcare providers, it is important to inform patients, family members, and friends that such a resource exists. By offering such a service, patients are provided with accurate, reliable, and easily accessible information appropriate to their needs.

  14. Integrating Multilevel Command and Control into a Service Oriented Architecture to Provide Cross Domain Capability

    DTIC Science & Technology

    2006-06-01

    Horizontal Fusion, the JCDX team developed two web services, a Classification Policy Decision Service (cPDS), and a Federated Search Provider (FSP)...The cPDS web service primarily provides other systems with methods for handling labeled data such as label comparison. The federated search provider...level domains. To provide defense-in-depth, cPDS and the Federated Search Provider are implemented on a separate server known as the JCDX Web

  15. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, gives a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
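
    A small statevector sketch of the single-target Grover iteration (oracle phase flip followed by inversion about the mean), enough to reproduce the quadratic speedup numerically; it is an illustration, not an efficient quantum simulation.

```python
import numpy as np

def grover(n_items, target, n_iters=None):
    """Simulate Grover search over n_items database entries and return
    the probability of measuring the target index."""
    amp = np.full(n_items, 1 / np.sqrt(n_items))   # uniform superposition
    if n_iters is None:
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(n_items)))  # ~(pi/4)sqrt(N)
    for _ in range(n_iters):
        amp[target] *= -1                          # oracle: flip target phase
        amp = 2 * amp.mean() - amp                 # inversion about the mean
    return abs(amp[target])**2

print(grover(1024, target=7))   # close to 1 after ~25 iterations
```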

  16. 45 CFR 2552.27 - What two search components of the National Service Criminal History Check must I satisfy to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Criminal History Check must I satisfy to determine an individual's suitability to serve in a covered... a Sponsor § 2552.27 What two search components of the National Service Criminal History Check must I... conduct and document a National Service Criminal History Check, which consists of the following two search...

  17. Effects of librarian-provided services in healthcare settings: a systematic review

    PubMed Central

    Perrier, Laure; Farrell, Ann; Ayala, A Patricia; Lightfoot, David; Kenny, Tim; Aaronson, Ellen; Allee, Nancy; Brigham, Tara; Connor, Elizabeth; Constantinescu, Teodora; Muellenbach, Joanne; Epstein, Helen-Ann Brown; Weiss, Ardis

    2014-01-01

    Objective To assess the effects of librarian-provided services in healthcare settings on patient, healthcare provider, and researcher outcomes. Materials and methods Medline, CINAHL, ERIC, LISA (Library and Information Science Abstracts), and the Cochrane Central Register of Controlled Trials were searched from inception to June 2013. Studies involving librarian-provided services for patients encountering the healthcare system, healthcare providers, or researchers were eligible for inclusion. All librarian-provided services in healthcare settings were considered as an intervention, including hospitals, primary care settings, or public health clinics. Results Twenty-five articles fulfilled our eligibility criteria, including 22 primary publications and three companion reports. The majority of studies (15/22 primary publications) examined librarians providing instruction in literature searching to healthcare trainees, and measured literature searching proficiency. Other studies analyzed librarian-provided literature searching services and instruction in question formulation as well as the impact of librarian-provided services on patient length of stay in hospital. No studies were found that investigated librarians providing direct services to researchers or patients in healthcare settings. Conclusions Librarian-provided services directed to participants in training programs (eg, students, residents) improve skills in searching the literature to facilitate the integration of research evidence into clinical decision-making. Services provided to clinicians were shown to be effective in saving time for health professionals and providing relevant information for decision-making. Two studies indicated patient length of stay was reduced when clinicians requested literature searches related to a patient's case. PMID:24872341

  18. Description of CRIB, the GIPSY retrieval mechanism, and the interface to the General Electric MARK III Service : CRIB, the mineral resources data bank of the U.S. Geological Survey--guide for public users, 1977

    USGS Publications Warehouse

    Calkins, James Alfred; Keefer, Eleanor K.; Ofsharick, Regina A.; Mason, George T.; Tracy, Patricia; Atkins, Mary

    1978-01-01

    The U.S. Geological Survey's Computerized Resources Information Bank (CRIB) is being made available for public use through the computer facilities of the University of Oklahoma and the General Electric Company, U.S.A. The use of General Electric's worldwide information-services network provides access to the CRIB file to a worldwide clientele. This manual, which consists of two chapters, is intended as a guide to users who wish to interrogate the file. Chapter A contains a description of the CRIB file, information on the use of the GIPSY retrieval system, and a description of the General Electric MARK III Service. Chapter B contains a description of the individual data items in the CRIB record as well as code lists. CRIB consists of a set of variable-length records on the metallic and nonmetallic mineral resources of the United States and other countries. At present, 31,645 records in the master file are being made available. The record contains information on mineral deposits and mineral commodities. Some topics covered are: deposit name, location, commodity information, description of deposit, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which maintains the data file and builds, updates, searches, and prints the records using simple yet versatile command statements. Searching and selecting records is accomplished by specifying the presence, absence, or content of any element of information in the record; these specifications can be logically linked to prepare sophisticated search strategies. Output is available in the form of the complete record, a listing of selected parts of the record, or fixed-field tabulations. The General Electric MARK III Service is a computerized information services network operating internationally by land lines, satellites, and undersea cables. The service is available by local telephone to 500 cities in North America, Western Europe, Australia, Southeast Asia, Japan, and Saudi Arabia. An interface called the 'foreground driver' is used to link the GIPSY program to the General Electric system.

  19. End-user searching: impetus for an expanding information management and technology role for the hospital librarian.

    PubMed Central

    Klein, M S; Ross, F

    1997-01-01

    Using the results of the 1993 Medical Library Association (MLA) Hospital Libraries Section survey of hospital-based end-user search services, this article describes how end-user search services can become an impetus for an expanded information management and technology role for the hospital librarian. An end-user services implementation plan is presented that focuses on software, hardware, finances, policies, staff allocations and responsibilities, educational program design, and program evaluation. Possibilities for extending end-user search services into information technology and informatics, specialized end-user search systems, and Internet access are described. Future opportunities are identified for expanding the hospital librarian's role in the face of changing health care management, advances in information technology, and increasing end-user expectations. PMID:9285126

  20. Multi-fidelity and multi-disciplinary design optimization of supersonic business jets

    NASA Astrophysics Data System (ADS)

    Choi, Seongim

    Supersonic jets have been drawing great attention since the end of service for the Concorde was announced in April 2003. It is believed, however, that civilian supersonic aircraft may make a viable return in the business jet market. This thesis focuses on the design optimization of feasible supersonic business jet configurations. Preliminary design techniques for mitigation of ground sonic boom are investigated while ensuring that all relevant disciplinary constraints are satisfied (including aerodynamic performance, propulsion, stability & control, and structures). In order to achieve reasonable confidence in the resulting designs, high-fidelity simulations are required, making the entire design process both expensive and complex. In order to minimize the computational cost, surrogate/approximate models are constructed using a hierarchy of different-fidelity analysis tools including PASS, A502/Panair, and Euler/NS codes. Direct search methods such as Genetic Algorithms (GAs) and a nonlinear SIMPLEX are employed in searches of large and noisy design spaces. A local gradient-based search method can be combined with these global search methods for small modifications of candidate optimum designs. The Mesh Adaptive Direct Search (MADS) method can also be used to explore the design space using a solution-adaptive grid refinement approach. These hybrid approaches, both in search methodology and surrogate model construction, are shown to result in designs with reductions in sonic boom and improved aerodynamic performance.

  1. Beyond the online catalog: developing an academic information system in the sciences.

    PubMed Central

    Crawford, S; Halbrook, B; Kelly, E; Stucki, L

    1987-01-01

    The online public access catalog consists essentially of a machine-readable database with network capabilities. Like other computer-based information systems, it may be continuously enhanced by the addition of new capabilities and databases. It may also become a gateway to other information networks. This paper reports the evolution of the Bibliographic Access and Control System (BACS) of Washington University in end-user searching, current awareness services, information management, and administrative functions. Ongoing research and development and the future of the online catalog are also discussed. PMID:3315052

  2. Beyond the online catalog: developing an academic information system in the sciences.

    PubMed

    Crawford, S; Halbrook, B; Kelly, E; Stucki, L

    1987-07-01

    The online public access catalog consists essentially of a machine-readable database with network capabilities. Like other computer-based information systems, it may be continuously enhanced by the addition of new capabilities and databases. It may also become a gateway to other information networks. This paper reports the evolution of the Bibliographic Access and Control System (BACS) of Washington University in end-user searching, current awareness services, information management, and administrative functions. Ongoing research and development and the future of the online catalog are also discussed.

  3. 75 FR 21044 - Sunshine Act Meeting of the Board of Directors Search Committee for LSC President; Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-22

    ... LEGAL SERVICES CORPORATION Sunshine Act Meeting of the Board of Directors Search Committee for LSC President; Notice Time and Date: The Presidential Search Committee of the Legal Services Corporation's Board.... 2. Consider and act on draft Request for Proposals for executive search firms. 3. Public Comment. 4...

  4. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.
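
    A sketch of the idea: once base and calibration solutions fix a minimum-drag direction, the costly design-space searches along that direction can be fed by a cheap estimate (standing in for the similarity-theory evaluation) instead of the full flow solver. All names are illustrative assumptions.

```python
import numpy as np

def directional_line_search(drag_surrogate, x0, direction, steps=20, span=1.0):
    """Search along a fixed minimum-drag direction using a cheap surrogate.
    x0 and direction are NumPy design vectors; drag_surrogate maps a
    design vector to an (approximate) drag value."""
    alphas = np.linspace(0.0, span, steps)
    designs = [x0 + a * direction for a in alphas]
    drags = [drag_surrogate(x) for x in designs]     # cheap evaluations only
    best = int(np.argmin(drags))
    return designs[best], drags[best]
```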

  5. 12 CFR 4.17 - Fees for services.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... EXAMINERS Availability of Information Under the Freedom of Information Act § 4.17 Fees for services. (a... expenditures that the OCC incurs in providing services (including searching for, reviewing, and duplicating...(c). The OCC may contract with a commercial service to search for, duplicate, or disseminate records...

  6. 32 CFR Appendix A to Part 292 - Uniform Agency Fees for Search and Duplication Under the Freedom of Information Act (as Amended)

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... a. above) for the computer/operator/programmer determining how to conduct and subsequently executing the search will be recorded as part of the computer search. c. Actual time spent travelling to a...

  7. Optimal directed searches for continuous gravitational waves

    NASA Astrophysics Data System (ADS)

    Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning

    2016-03-01

    Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
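
    In compact form, the framework maximizes detection probability under a computing budget; the notation below is generic rather than the paper's exact symbols.

```latex
% Pick the search setup (coherent length T_coh, parameter-space region R,
% grid mismatch m) maximizing the detection probability P_det subject to
% the computing-cost budget C_max:
\begin{aligned}
  \max_{T_{\mathrm{coh}},\,\mathcal{R},\,m} \quad
    & P_{\mathrm{det}}\!\left(T_{\mathrm{coh}}, \mathcal{R}, m\right) \\
  \text{subject to} \quad
    & C\!\left(T_{\mathrm{coh}}, \mathcal{R}, m\right) \le C_{\max}.
\end{aligned}
```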

  8. The Perfectly Organized Search Service.

    ERIC Educational Resources Information Center

    Leach, Sandra Sinsel; Spencer, Mary Ellen

    1993-01-01

    Describes the evolution and operation of the successful Database Search Service (DSS) at the John C. Hodges Library, University of Tennessee, with detailed information about equipment, policies, software, training, and physical layout. Success is attributed to careful administration, standardization of search equipment and interfaces, staff…

  9. Caught on the Web

    ERIC Educational Resources Information Center

    Isakson, Carol

    2004-01-01

    Search engines rapidly add new services and experimental tools in trying to outmaneuver each other for customers. In this article, the author describes the latest additional services of some search engines and notes their sources. The author also suggests tips for using these new search upgrades.

  10. J-Plus Web Portal

    NASA Astrophysics Data System (ADS)

    Civera Lorenzo, Tamara

    2017-10-01

    A brief presentation of the J-PLUS EDR data access web portal (http://archive.cefca.es/catalogues/jplus-edr), covering the different services available for retrieving image and catalogue data. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data containing parameters measured from the images. The J-PLUS web portal offers catalogue data and images through several different online data access tools or services, each suited to a particular need. The services offered are: coverage map, sky navigator, object visualization, image search, cone search, object list search, and Virtual Observatory services (Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, and Table Access Protocol).

  11. 49 CFR 7.43 - Fee schedule.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... The rates for manual searching, computer operator/programmer time and time spent reviewing records... requested under subpart C of this part, including making it available for inspection, will be determined by... search. (b) The standard fee for a computer search for a record requested under subpart C of this part is...

  12. 49 CFR 7.43 - Fee schedule.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... The rates for manual searching, computer operator/programmer time and time spent reviewing records... requested under subpart C of this part, including making it available for inspection, will be determined by... search. (b) The standard fee for a computer search for a record requested under subpart C of this part is...

  13. Effects of librarian-provided services in healthcare settings: a systematic review.

    PubMed

    Perrier, Laure; Farrell, Ann; Ayala, A Patricia; Lightfoot, David; Kenny, Tim; Aaronson, Ellen; Allee, Nancy; Brigham, Tara; Connor, Elizabeth; Constantinescu, Teodora; Muellenbach, Joanne; Epstein, Helen-Ann Brown; Weiss, Ardis

    2014-01-01

    To assess the effects of librarian-provided services in healthcare settings on patient, healthcare provider, and researcher outcomes. Medline, CINAHL, ERIC, LISA (Library and Information Science Abstracts), and the Cochrane Central Register of Controlled Trials were searched from inception to June 2013. Studies involving librarian-provided services for patients encountering the healthcare system, healthcare providers, or researchers were eligible for inclusion. All librarian-provided services in healthcare settings were considered as an intervention, including hospitals, primary care settings, or public health clinics. Twenty-five articles fulfilled our eligibility criteria, including 22 primary publications and three companion reports. The majority of studies (15/22 primary publications) examined librarians providing instruction in literature searching to healthcare trainees, and measured literature searching proficiency. Other studies analyzed librarian-provided literature searching services and instruction in question formulation as well as the impact of librarian-provided services on patient length of stay in hospital. No studies were found that investigated librarians providing direct services to researchers or patients in healthcare settings. Librarian-provided services directed to participants in training programs (eg, students, residents) improve skills in searching the literature to facilitate the integration of research evidence into clinical decision-making. Services provided to clinicians were shown to be effective in saving time for health professionals and providing relevant information for decision-making. Two studies indicated patient length of stay was reduced when clinicians requested literature searches related to a patient's case. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
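
    A sketch of the text-descriptor search path described above: query terms are expanded through a synonym dictionary and matched against free-text interpretations. The dictionary entry and reports are toy data; Liveview's actual query syntax is not represented.

```python
def build_query_terms(term, synonym_dict):
    """Expand a search term with synonyms from a domain dictionary."""
    return {term.lower(), *synonym_dict.get(term.lower(), set())}

def search_reports(reports, term, synonym_dict):
    """Return IDs of reports whose free-text interpretation mentions the
    term or any synonym. `reports` maps an ID to interpretation text."""
    terms = build_query_terms(term, synonym_dict)
    return [rid for rid, text in reports.items()
            if any(t in text.lower() for t in terms)]

# Hypothetical dictionary entry and two toy reports:
synonyms = {"pneumothorax": {"collapsed lung"}}
reports = {1: "Small apical pneumothorax on the right.",
           2: "No evidence of collapsed lung."}
print(search_reports(reports, "Pneumothorax", synonyms))   # [1, 2]
```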

  15. Computing environment logbook

    DOEpatents

    Osbourn, Gordon C; Bouchard, Ann M

    2012-09-18

    A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
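
    A minimal sketch of the logbook abstraction the patent describes (log events, search the history, undo a selected past event), assuming callers register their own undo callbacks; the patented mechanism is certainly more involved.

```python
class Logbook:
    """Event log with keyword search and per-event undo."""
    def __init__(self):
        self._events = []                    # list of (description, undo_fn)

    def log(self, description, undo_fn=None):
        self._events.append((description, undo_fn))

    def search(self, keyword):
        """Find past events whose description mentions the keyword."""
        return [(i, d) for i, (d, _) in enumerate(self._events)
                if keyword.lower() in d.lower()]

    def undo(self, index):
        """Undo one selected past event, if it registered an undo action;
        the undo itself is logged as a new event."""
        description, undo_fn = self._events[index]
        if undo_fn:
            undo_fn()
        self._events.append((f"undo: {description}", None))
```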

  16. Reviewing and Critiquing Computer Learning and Usage among Older Adults

    ERIC Educational Resources Information Center

    Kim, Young Sek

    2008-01-01

    By searching the keywords of "older adult" and "computer" in ERIC, Academic Search Premier, and PsycINFO, this study reviewed 70 studies published after 1990 that address older adults' computer learning and usage. This study revealed 5 prominent themes among reviewed literature: (a) motivations and barriers of older adults' usage of computers, (b)…

  17. 75 FR 54388 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-07

    ... LEGAL SERVICES CORPORATION Sunshine Act Meeting Time and Date: The Legal Services Corporation Board of Directors' Search Committee for LSC President (``Search Committee'' or ``Committee'') will meet... conclusion of the Committee's agenda. Location: Legal Services Corporation, 3333 K Street, NW., Washington...

  18. Home | www.charlescountymd.gov

    Science.gov Websites


  19. Municipal solid waste transportation optimisation with vehicle routing approach: case study of Pontianak City, West Kalimantan

    NASA Astrophysics Data System (ADS)

    Kamal, M. A.; Youlla, D.

    2018-03-01

    Municipal solid waste (MSW) transportation in Pontianak City is an issue that needs to be tackled by the relevant agencies. The MSW transportation service in Pontianak City currently requires substantial resources, especially in vehicle usage. Increasing the fleet size has not increased service levels, while garbage volume grows every year along with the population. In this research, a vehicle routing optimization approach was used to find cost-efficient optimal routes for transporting garbage from several Temporary Garbage Dumps (TGD) to the Final Garbage Dump (FGD). One complication of MSW transportation is that some TGDs exceed the vehicle capacity and must be visited more than once. The optimal solution suggests that the municipal authorities use only 3 of the 5 vehicles provided, with a total minimum cost of IDR 778,870. The computation time needed to find the optimal routes and minimal cost is considerable, driven by the number of constraints and integer decision variables.
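
    For contrast with the optimal solution reported above, a nearest-neighbour construction heuristic for a capacitated routing problem; coordinates, demands, and capacity are placeholders. Note that the study's complication of a dump exceeding vehicle capacity is not handled here: such a demand would have to be split across visits first.

```python
import math

def greedy_routes(depot, dumps, demands, capacity):
    """Repeatedly drive to the closest unserved dump that still fits in
    the truck; start a new route when nothing fits. A baseline heuristic,
    not an optimal solver."""
    unserved = set(dumps)
    routes = []
    while unserved:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [d for d in unserved if load + demands[d] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda d: math.dist(pos, d))
            route.append(nxt)
            load += demands[nxt]
            pos = nxt
            unserved.remove(nxt)
        if not route:   # a single dump exceeds capacity: must be split
            raise ValueError("demand exceeds vehicle capacity")
        routes.append(route)
    return routes

# e.g. greedy_routes((0, 0), [(1, 2), (3, 1), (5, 5)],
#                    {(1, 2): 4, (3, 1): 3, (5, 5): 6}, capacity=8)
```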

  20. [A survey of the best bibliographic searching system in occupational medicine and discussion of its implementation].

    PubMed

    Inoue, J

    1991-12-01

    When occupational health personnel, especially occupational physicians, search bibliographies, they usually have to do so by themselves. Also, if a library is not available because of the location of their work place, they might have to rely on online databases. Although there are many commercial databases in the world, people who seldom use them will have problems with on-line searching, such as with the user-computer interface, keywords, and so on. The present study surveyed the best bibliographic searching system in the field of occupational medicine by questionnaire, using DIALOG OnDisc MEDLINE as a commercial database. In order to ascertain the problems involved in determining the best bibliographic searching system, a prototype bibliographic searching system was constructed and then evaluated. Finally, solutions for the problems were discussed. These led to the following conclusions: to construct the best bibliographic searching system at the present time, 1) a concept of micro-to-mainframe links (MML) is needed for the computer hardware network; 2) multi-lingual font standards and an excellent common user-computer interface are needed for the computer software; 3) a short course on database management systems, and support for personal information processing of retrieved data, are necessary for the practical use of the system.

  1. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from the massive and heterogeneous resources. Past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge, for the following reasons: 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats, and GUI styles to organize, present, and publish metadata. It is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to these issues. However, integrating semantic technologies with existing services is challenging due to expandability limitations of the service frameworks and metadata templates. 4) The capabilities for helping users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore, and analyze search results. Furthermore, the presentation of value-added additional information (such as service quality and user feedback), which conveys important decision-supporting information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on a brokering middleware framework to search, integrate, and visualize heterogeneous geospatial resources. Specifically: 1) A lightweight discovery broker conducts distributed search, retrieving metadata records for geospatial resources and additional information from dispersed services (portals and catalogues) and other systems on the fly. 2) A quality monitoring and evaluation broker (i.e., QoS Checker) provides quality information for geospatial web services. 3) Semantics-assisted search and relevance evaluation functions are implemented by loosely interoperating with the ESIP Testbed component. 4) Sophisticated information and data visualization functionalities and tools are assembled to improve user experience and assist resource selection.

  2. Implementing the Army NetCentric Data Strategy in a Service-Oriented Environment

    DTIC Science & Technology

    2009-04-23

    Data Discovery, Data Retrieval, Data Subscription... Data Access... Artifact Discovery, Federated Search, Data Search... define common interfaces to search and retrieve data across the enterprise. Patterns: Search, Status, Receive. Services: Federated Search, Artifact...

  3. Semantic Web Data Discovery of Earth Science Data at NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Hegde, Mahabaleshwara; Strub, Richard F.; Lynnes, Christopher S.; Fang, Hongliang; Teng, William

    2008-01-01

    Mirador is a web interface for searching Earth Science data archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Mirador provides keyword-based search and guided navigation for efficient search and access to Earth Science data. Mirador employs the power of Google's universal search technology for fast metadata keyword searches, augmented by additional capabilities such as event searches (e.g., hurricanes), searches based on a location gazetteer, and data services like format converters and data sub-setters. The objective of guided data navigation is to present users with multiple navigation paths. The basis for guided navigation in Mirador is an ontology based on the Global Change Master Directory (GCMD) Directory Interchange Format (DIF). The current implementation includes the project ontology covering various instruments and model data. Additional capabilities in the pipeline include Earth Science parameter and applications ontologies.

  4. Omokage search: shape similarity search service for biomolecular structures in both the PDB and EMDB.

    PubMed

    Suzuki, Hirofumi; Kawabata, Takeshi; Nakamura, Haruki

    2016-02-15

    Omokage search is a service for searching the global shape similarity of biological macromolecules and their assemblies in both the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB). The server compares global shapes of assemblies independent of sequence order and number of subunits. As a search query, the user inputs a structure ID (PDB ID or EMDB ID) or uploads an atomic model or 3D density map to the server. The search usually completes within 1 min, using one-dimensional profiles (incremental distance rank profiles) to characterize the shapes. Using the gmfit (Gaussian mixture model fitting) program, the structures found are fitted onto the query structure, and their superimposed structures are displayed in the Web browser. Our service provides new structural perspectives to life science researchers. Omokage search is freely accessible at http://pdbj.org/omokage/. © The Author 2015. Published by Oxford University Press.

  5. Efficient and Scalable Cross-Matching of (Very) Large Catalogs

    NASA Astrophysics Data System (ADS)

    Pineau, F.-X.; Boch, T.; Derriere, S.

    2011-07-01

    Whether it be for building multi-wavelength datasets from independent surveys, studying changes in objects' luminosities, or detecting moving objects (stellar proper motions, asteroids), cross-catalog matching is a technique widely used in astronomy. The need for efficient, reliable and scalable cross-catalog matching is becoming even more pressing with forthcoming projects which will produce huge catalogs in which astronomers will dig for rare objects, perform statistical analysis and classification, or detect transients in real time. We have developed a formalism and the corresponding technical framework to address the challenge of fast cross-catalog matching. Our formalism supports more than simple nearest-neighbor search, and handles elliptical positional errors. Scalability is improved by partitioning the sky using the HEALPix scheme and processing each sky cell independently. The use of multi-threaded two-dimensional kd-trees adapted to equatorial coordinates enables efficient neighbor search. The whole process can run on a single computer, but could also use clusters of machines to cross-match future very large surveys such as GAIA or LSST in reasonable times. We already achieve performance such that 2MASS (˜470M sources) and SDSS DR7 (˜350M sources) can be matched on a single machine in less than 10 minutes. We aim to provide astronomers with a catalog cross-matching service, available on-line and leveraging the catalogs present in the VizieR database. This service will allow users both to access pre-computed cross-matches across some very large catalogs, and to run customized cross-matching operations. It will also support VO protocols for synchronous or asynchronous queries.
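
    The core neighbor search can be illustrated with a small sketch: a minimal Python cross-match, assuming circular (not elliptical) positional errors and a single process, with coordinates mapped to 3-D unit vectors so that a kd-tree's Euclidean metric approximates angular separation. The HEALPix partitioning, multi-threading, and error ellipses of the actual framework are omitted.

```python
# Minimal positional cross-match sketch (circular errors assumed).
import numpy as np
from scipy.spatial import cKDTree

def radec_to_xyz(ra_deg, dec_deg):
    """Map equatorial coordinates to points on the unit sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack((np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)))

def cross_match(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Return (i, j) index pairs whose separation is below the radius."""
    xyz1 = radec_to_xyz(ra1, dec1)
    xyz2 = radec_to_xyz(ra2, dec2)
    # chord length equivalent to the requested angular radius
    chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    tree = cKDTree(xyz2)
    return [(i, j)
            for i, hits in enumerate(tree.query_ball_point(xyz1, chord))
            for j in hits]
```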

  6. Searching U.S. Patents: Core Collection and Suggestions for Service.

    ERIC Educational Resources Information Center

    Harwell, Kevin R.

    1993-01-01

    Provides fundamental information about patents, describes effective and affordable reference resources, and discusses specific issues in providing patent information services to inventors and other patrons. Basic resources, including CD-ROM products, patent classification and searching resources, and other search tools are described in an…

  7. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity via the UESA (Unimodal Error Surface Assumption), under which the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. However, these BMAs, especially for large motion, are easily trapped in local minima and yield poor matching accuracy. We therefore propose a new motion estimation algorithm using the spatial correlation among neighboring blocks: we move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search), but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, with half the computational load.
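
    The underlying (non-adaptive) diamond search that FADS builds on is easy to sketch. The following Python fragment is illustrative only: it implements plain DS with a SAD cost, and merely exposes the `origin` parameter that FADS would set from the neighboring blocks' motion vectors and MAEs.

```python
# Minimal diamond-search block matching sketch (grayscale numpy frames).
import numpy as np

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]        # large diamond pattern
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]  # small diamond pattern

def sad(cur, ref, y, x, dy, dx, n):
    """Sum of absolute differences between a block and a shifted block."""
    ry, rx = y + dy, x + dx
    if ry < 0 or rx < 0 or ry + n > ref.shape[0] or rx + n > ref.shape[1]:
        return np.inf
    return np.abs(cur[y:y+n, x:x+n].astype(int)
                  - ref[ry:ry+n, rx:rx+n].astype(int)).sum()

def diamond_search(cur, ref, y, x, n=16, origin=(0, 0)):
    """Return the motion vector minimizing SAD, starting from `origin`
    (FADS would derive `origin` from spatially neighboring blocks)."""
    my, mx = origin
    while True:  # large-diamond phase: walk until the center is best
        _, by, bx = min((sad(cur, ref, y, x, my + dy, mx + dx, n), dy, dx)
                        for dy, dx in LDSP)
        if (by, bx) == (0, 0):
            break
        my, mx = my + by, mx + bx
    _, by, bx = min((sad(cur, ref, y, x, my + dy, mx + dx, n), dy, dx)
                    for dy, dx in SDSP)  # small-diamond refinement
    return my + by, mx + bx
```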

  8. 24 CFR 576.105 - Housing relocation and stabilization services.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... services: (1) Housing search and placement. Services or activities necessary to assist program participants... housing; (iii) Housing search; (iv) Outreach to and negotiation with owners; (v) Assistance with... families applying for or receiving homelessness prevention or rapid re-housing assistance; (B) Conducting...

  9. 24 CFR 576.105 - Housing relocation and stabilization services.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... services: (1) Housing search and placement. Services or activities necessary to assist program participants... housing; (iii) Housing search; (iv) Outreach to and negotiation with owners; (v) Assistance with... families applying for or receiving homelessness prevention or rapid re-housing assistance; (B) Conducting...

  10. 24 CFR 576.105 - Housing relocation and stabilization services.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... services: (1) Housing search and placement. Services or activities necessary to assist program participants... housing; (iii) Housing search; (iv) Outreach to and negotiation with owners; (v) Assistance with... families applying for or receiving homelessness prevention or rapid re-housing assistance; (B) Conducting...

  11. Cloudy Solar Software - Enhanced Capabilities for Finding, Pre-processing, and Visualizing Solar Data

    NASA Astrophysics Data System (ADS)

    Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.

    2010-05-01

    In our project "Extending the Virtual Solar Observatory (VSO)" we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.

  12. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  13. Oregon State University | Oregon State University

    Science.gov Websites


  14. Fast parallel tandem mass spectral library searching using GPU hardware acceleration.

    PubMed

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K; Martin, Daniel B

    2011-06-03

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper, we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment.
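
    The "simple but arithmetically intense multiplication of two vectors" at the heart of the method can be shown in a few lines. The sketch below uses plain numpy in place of CUDA and assumes spectra pre-binned onto a common m/z grid and L2-normalized, so the library comparison collapses to one matrix-vector product; it illustrates the parallelizable kernel, not FastPaSS itself.

```python
# Minimal sketch of the spectral-library scoring kernel (numpy stand-in
# for the GPU): dot products between a query and every library spectrum.
import numpy as np

def library_scores(query, library):
    """query: (n_bins,); library: (n_spectra, n_bins) -> similarity scores."""
    return library @ query  # the arithmetically intense, parallelizable step

rng = np.random.default_rng(0)
lib = rng.random((10_000, 1000)).astype(np.float32)
lib /= np.linalg.norm(lib, axis=1, keepdims=True)        # L2-normalize
query = lib[42] + 0.01 * rng.random(1000).astype(np.float32)
query /= np.linalg.norm(query)
print(int(np.argmax(library_scores(query, lib))))        # -> 42
```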

  15. Efficient computational methods to study new and innovative signal detection techniques in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.

    1991-01-01

    The purpose of the research reported here is to provide a rapid computational method for computing various statistical parameters associated with overlapped Hann spectra. These results are important for the Targeted Search part of the Search for ExtraTerrestrial Intelligence (SETI) Microwave Observing Project.

  16. Practicing evidence based medicine at the bedside: a randomized controlled pilot study in undergraduate medical students assessing the practicality of tablets, smartphones, and computers in clinical life.

    PubMed

    Friederichs, Hendrik; Marschall, Bernhard; Weissenstein, Anne

    2014-12-05

    Practicing evidence-based medicine is an important aspect of providing good medical care. Accessing external information through literature searches on computer-based systems can effectively achieve integration into clinical care. We conducted a pilot study using smartphones, tablets, and stationary computers as search devices at the bedside. The objective was to determine possible differences between the various devices and assess students' internet use habits. In a randomized controlled pilot study, 120 students were divided into three groups. One control group solved clinical problems on a computer and two intervention groups used mobile devices at the bedside. In a questionnaire, students were asked to report their internet use habits as well as their satisfaction with their respective search tool using a 5-point Likert scale. Of 120 surveys, 94 (78.3%) complete data sets were analyzed. The mobility of the tablet (3.90) and the smartphone (4.39) was seen as a significant advantage over the computer (2.38, p < .001). However, for performing an effective literature search at the bedside, the computer (3.22) was rated superior to both tablet computers (2.13) and smartphones (1.68). No significant differences were detected between tablets and smartphones except satisfaction with screen size (tablet 4.10, smartphone 2.00, p < .001). Using a mobile device at the bedside to perform an extensive search is not suitable for students who prefer using computers. However, mobility is regarded as a substantial advantage, and therefore future applications might facilitate quick and simple searches at the bedside.

  17. Perspectives for Web Service Intermediaries: How Influence on Quality Makes the Difference

    NASA Astrophysics Data System (ADS)

    Scholten, Ulrich; Fischer, Robin; Zirpins, Christian

    In the service-oriented computing paradigm and the Web service architecture, the broker role is a key facilitator for leveraging the technical capabilities of loose coupling to achieve organizational capabilities of dynamic customer-provider relationships. In practice, this role has quickly evolved into a variety of intermediary concepts that refine and extend the basic functionality of service brokerage with respect to various forms of added value like platform or market mechanisms. While this initially led to a rich variety of Web service intermediaries, many of these are now going through a phase of stagnation or even decline in customer acceptance. In this paper we present a comparative study on insufficient service quality, which is arguably one of the key reasons for this phenomenon. In search of a differentiation with respect to quality monitoring and management patterns, we categorize intermediaries into Infomediaries, e-Hubs, e-Markets and Integrators. A mapping of quality factors and control mechanisms to these categories depicts their respective strengths and weaknesses. The results show that Integrators have the highest overall performance, followed by e-Markets, e-Hubs and lastly Infomediaries. A comparative market survey confirms the conceptual findings.

  18. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Levis, C. A.; Mount-Campbell, C.; Gonsalvez, D. J.; Wang, C. W.; Yamamura, Y.

    1985-01-01

    Computer-based techniques for optimizing communications-satellite orbit and frequency assignments are discussed. A gradient-search code was tested against a BSS scenario derived from the RARC-83 data. Improvement was obtained, but each iteration requires about 50 minutes of IBM-3081 CPU time. Gradient-search experiments on a small FSS test problem, consisting of a single service area served by 8 satellites, showed quickest convergence when the satellites were all initially placed near the center of the available orbital arc with moderate spacing. A transformation technique is proposed for investigating the surface topography of the objective function used in the gradient-search method. A new synthesis approach is based on transforming single-entry interference constraints into corresponding constraints on satellite spacings. These constraints are used with linear objective functions to formulate the co-channel orbital assignment task as a linear-programming (LP) problem or mixed integer programming (MIP) problem. Globally optimal solutions are always found with the MIP problems, but not necessarily with the LP problems. The MIP solutions can be used to evaluate the quality of the LP solutions. The initial results are very encouraging.
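
    The LP formulation described at the end of the abstract can be illustrated with a toy instance, assuming a fixed east-to-west satellite ordering and a single constant spacing requirement (the actual single-entry interference constraints are richer). All numbers are illustrative.

```python
# Toy LP sketch: place satellites as close as possible to desired slots
# subject to minimum-spacing constraints (fixed ordering assumed).
import numpy as np
from scipy.optimize import linprog

desired = np.array([10.0, 12.0, 13.0, 16.0])   # desired longitudes (deg)
d_min, n = 2.0, 4                               # required spacing (deg)

# Variables: positions x[0..n-1] and slacks s[0..n-1] with s >= |x - desired|.
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize total displacement
A_ub, b_ub = [], []
for i in range(n - 1):                          # spacing: x[i+1] - x[i] >= d_min
    row = np.zeros(2 * n); row[i], row[i + 1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(-d_min)
for i in range(n):                              # linearize the absolute value
    r1 = np.zeros(2 * n); r1[i], r1[n + i] = 1.0, -1.0
    r2 = np.zeros(2 * n); r2[i], r2[n + i] = -1.0, -1.0
    A_ub += [r1, r2]; b_ub += [desired[i], -desired[i]]

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n + [(0, None)] * n)
print(res.x[:n])  # e.g. [10, 12, 14, 16]: one satellite moved by one degree
```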

  19. Netwar

    NASA Astrophysics Data System (ADS)

    Keen, Arthur A.

    2006-04-01

    This paper describes technology being developed at 21st Century Technologies to automate Computer Network Operations (CNO). CNO refers to DoD activities related to Attacking and Defending Computer Networks (CNA & CND). Next generation cyber threats are emerging in the form of powerful Internet services and tools that automate intelligence gathering, planning, testing, and surveillance. We will focus on "Search-Engine Hacks", queries that can retrieve lists of router/switch/server passwords, control panels, accessible cameras, software keys, VPN connection files, and vulnerable web applications. Examples include "Titan Rain" attacks against DoD facilities and the Santy worm, which identifies vulnerable sites by searching Google for URLs containing application-specific strings. This trend will result in increasingly sophisticated and automated intelligence-driven cyber attacks coordinated across multiple domains that are difficult to defeat or even understand with current technology. One traditional method of CNO relies on surveillance detection as an attack predictor. Unfortunately, surveillance detection is difficult because attackers can perform search engine-driven surveillance such as with Google Hacks, and avoid touching the target site. Therefore, attack observables represent only about 5% of the attacker's total attack time, and are inadequate to provide warning. In order to predict attacks and defend against them, CNO must also employ more sophisticated techniques and work to understand the attacker's Motives, Means and Opportunities (MMO). CNO must use automated reconnaissance tools, such as Google, to identify information vulnerabilities, and then utilize Internet tools to observe the intelligence gathering, planning, testing, and collaboration activities that represent 95% of the attacker's effort.

  20. With Free Google Alert Services

    ERIC Educational Resources Information Center

    Gunn, Holly

    2005-01-01

    Alert services are a great way of keeping abreast of topics that interest you. Rather than searching the Web regularly to find new content about your areas of interest, an alert service keeps you informed by sending you notices when new material is added to the Web that matches your registered search criteria. Alert services are examples of push…

  1. The GEOSS Clearinghouse based on the GeoNetwork opensource

    NASA Astrophysics Data System (ADS)

    Liu, K.; Yang, C.; Wu, H.; Huang, Q.

    2010-12-01

    The Global Earth Observation System of Systems (GEOSS) is established to support the study of the Earth system in a global community. It provides services for social management, quick response, academic research, and education. The purpose of GEOSS is to achieve comprehensive, coordinated and sustained observations of the Earth system, improve monitoring of the state of the Earth, increase understanding of Earth processes, and enhance prediction of the behavior of the Earth system. In 2009, GEO called for a competition for an official GEOSS clearinghouse to be selected as a source for consolidating catalogs of Earth observations. The Joint Center for Intelligent Spatial Computing at George Mason University worked with USGS to submit a solution based on the open-source platform GeoNetwork. In the spring of 2010, this solution was selected as the product for the GEOSS Clearinghouse. The GEOSS Clearinghouse is a common search facility for the intergovernmental Group on Earth Observations (GEO). By providing a list of harvesting functions in its business logic, the GEOSS Clearinghouse can collect metadata from distributed catalogs including other GeoNetwork native nodes, webDAV/sitemap/WAF, Catalog Services for the Web (CSW) 2.0, the GEOSS Component and Service Registry (http://geossregistries.info/), OGC Web Services (WCS, WFS, WMS and WPS), the OAI Protocol for Metadata Harvesting 2.0, ArcSDE Server, and local file systems. Metadata in the GEOSS Clearinghouse are managed in a database (MySQL, PostgreSQL, Oracle, or MckoiDB), and an index of the metadata is maintained through the Lucene engine. Thus, EO data, services, and related resources can be discovered and accessed. The Clearinghouse supports a variety of geospatial standards, including CSW and SRU for search, FGDC and ISO metadata, and WMS-related OGC standards for data access and visualization, as linked from the metadata.
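
    One of the harvesting protocols listed above, OAI-PMH 2.0, is simple enough to sketch end to end. The following is a minimal illustration with a hypothetical repository URL, not GeoNetwork's implementation; it pages through results by following resumption tokens until the record list is exhausted.

```python
# Minimal OAI-PMH 2.0 harvesting sketch (hypothetical endpoint).
import xml.etree.ElementTree as ET

import requests

OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def harvest(endpoint, metadata_prefix="oai_dc"):
    """Yield metadata records, following OAI-PMH resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        resp = requests.get(endpoint, params=params, timeout=30)
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        for record in root.iterfind(".//oai:record", OAI_NS):
            yield record
        token = root.find(".//oai:resumptionToken", OAI_NS)
        if token is None or not (token.text or "").strip():
            return  # no more pages
        params = {"verb": "ListRecords", "resumptionToken": token.text}

# usage sketch: for rec in harvest("https://example.org/oai"): index(rec)
```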

  2. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research imply that new dedicated and powerful computational tools need to be developed to meet their analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-size genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data mining are somewhat restricted by the computational power behind each particular installation. In order to offer enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the Blast searches to obtain fast results for large datasets. Many efforts have been made in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid Service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through mpiBLAST. The software architecture is based on the WSRF standard.

  3. 20 CFR 617.20 - Responsibilities for the delivery of reemployment services.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... services; (6) Providing or procuring self-directed job search training, when necessary; (7) Providing training, job search and relocation assistance; (8) Developing a training plan with the individual; (9... reemployment services. 617.20 Section 617.20 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION...

  4. Parallel computing of a digital hologram and particle searching for microdigital-holographic particle-tracking velocimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki

    2007-02-01

    We have developed a parallel algorithm for microdigital-holographic particle-tracking velocimetry. The algorithm is used for (1) the numerical reconstruction of a particle image from a digital hologram by computer, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maxima of gradation in a reconstruction field represented by a 3D matrix. To achieve high-performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements; benchmarks were carried out for parallel computation on an SGI Altix machine.
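
    Both kernels (FFT-based Fresnel reconstruction and the local-maximum particle search) can be sketched in a single-process numpy form. This is an illustrative stand-in, not the authors' parallel code, and the Fresnel transfer function below assumes the paraxial approximation under one common sign convention.

```python
# Sketch of the two kernels: Fresnel reconstruction of one depth slice,
# and a 6-neighborhood local-maximum particle search over the 3D volume.
import numpy as np

def fresnel_slice(hologram, wavelength, dx, z):
    """Reconstruct intensity at distance z via the Fresnel transfer function."""
    ny, nx = hologram.shape
    fy = np.fft.fftfreq(ny, dx)[:, None]
    fx = np.fft.fftfreq(nx, dx)[None, :]
    H = np.exp(1j * np.pi * wavelength * z * (fx**2 + fy**2))
    field = np.fft.ifft2(np.fft.fft2(hologram) * H)
    return np.abs(field) ** 2

def find_particles(volume, threshold):
    """Return (z, y, x) indices of local maxima above a threshold."""
    core = volume[1:-1, 1:-1, 1:-1]
    is_peak = core > threshold
    for axis in range(3):
        for shift in (1, -1):  # compare against both neighbors on each axis
            neighbor = np.roll(volume, shift, axis)[1:-1, 1:-1, 1:-1]
            is_peak &= core >= neighbor
    return np.argwhere(is_peak) + 1  # offset back to full-volume indices
```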

  5. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
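
    The paper's central trick, sharing search points across PU partitions, can be paraphrased in a few lines. The sketch below is a schematic reading of that idea, not the proposed hardware design: the names and cost function are illustrative.

```python
# Sketch: pool one phase's search points across all PUs in a CU, drop
# duplicates so each point's cost is estimated once, then pick per-PU bests.
def evaluate_phase(pu_best_mvs, pattern, cost):
    points = {(my + dy, mx + dx)               # union of candidate points;
              for my, mx in pu_best_mvs        # redundant points across PUs
              for dy, dx in pattern}           # collapse inside the set
    costs = {p: cost(p) for p in points}       # each point evaluated once
    return [min(((my + dy, mx + dx) for dy, dx in pattern),
                key=lambda p: costs[p])        # best candidate for each PU
            for my, mx in pu_best_mvs]

DIAMOND = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]
# toy cost: prefer small motion vectors
print(evaluate_phase([(0, 0), (0, 1)], DIAMOND, cost=lambda p: sum(map(abs, p))))
```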

  6. A Study of Practical Proxy Reencryption with a Keyword Search Scheme considering Cloud Storage Structure

    PubMed Central

    Lee, Im-Yeong

    2014-01-01

    Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search. PMID:24693240

  7. A study of practical proxy reencryption with a keyword search scheme considering cloud storage structure.

    PubMed

    Lee, Sun-Ho; Lee, Im-Yeong

    2014-01-01

    Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search.

  8. Relatedness-based Multi-Entity Summarization

    PubMed Central

    Gunaratna, Kalpa; Yazdavar, Amir Hossein; Thirunarayan, Krishnaprasad; Sheth, Amit; Cheng, Gong

    2017-01-01

    Representing world knowledge in a machine-processable format is important as entities and their descriptions have fueled tremendous growth in knowledge-rich information processing platforms, services, and systems. Prominent applications of knowledge graphs include search engines (e.g., Google Search and Microsoft Bing), email clients (e.g., Gmail), and intelligent personal assistants (e.g., Google Now, Amazon Echo, and Apple's Siri). In this paper, we present an approach that can summarize facts about a collection of entities by analyzing their relatedness, in preference to summarizing each entity in isolation. Specifically, we generate informative entity summaries by selecting: (i) inter-entity facts that are similar and (ii) intra-entity facts that are important and diverse. We employ a constrained knapsack problem-solving approach to efficiently compute entity summaries. We perform both qualitative and quantitative experiments and demonstrate that our approach yields promising results compared to two other stand-alone state-of-the-art entity summarization approaches. PMID:29051696
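
    The budgeted-selection flavor of the formulation can be illustrated with a deliberately simplified greedy stand-in for the authors' constrained-knapsack solver; the facts, scores, and costs below are invented for illustration.

```python
# Greedy sketch of budget-constrained fact selection (illustrative only;
# the paper solves a constrained knapsack formulation, not this heuristic).
def summarize(facts, budget):
    """facts: (fact, importance, cost) triples; pick facts under a budget."""
    chosen, spent = [], 0
    for fact, value, cost in sorted(facts, key=lambda f: f[1] / f[2],
                                    reverse=True):  # best value-per-cost first
        if spent + cost <= budget:
            chosen.append(fact)
            spent += cost
    return chosen

facts = [("bornIn: Dayton", 0.9, 2), ("type: Person", 0.3, 1),
         ("knownFor: aviation", 0.8, 2), ("height: 1.78 m", 0.2, 1)]
print(summarize(facts, budget=4))  # -> the two high-value facts
```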

  9. Proposal for a telehealth concept in the translational research model.

    PubMed

    Silva, Angélica Baptista; Morel, Carlos Médicis; Moraes, Ilara Hämmerli Sozzi de

    2014-04-01

    To review the conceptual relationship between telehealth and translational research. A bibliographic search on telehealth was conducted in the Scopus, Cochrane BVS, LILACS and MEDLINE databases to find experiences of telehealth in conjunction with discussions of translational research in health. The search retrieved eight studies, which were analyzed using models of the five stages of translational research and the multiple strands of public health policy in the context of telehealth in Brazil. The models were applied to telehealth activities concerning the Network of Human Milk Banks, in the Telemedicine University Network. The translational research cycle of human milk collection, storage and distribution presents several integrated telehealth initiatives, such as video conferencing, and software and portals for synthesizing knowledge, composing elements of an information ecosystem mediated by information and communication technologies in the health system. Telehealth should comprise a set of activities in a computer-mediated network promoting the translation of knowledge between research and health services.

  10. 41 CFR 105-60.305-5 - Searches.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 60..., Policies, Interpretations, Manuals, and Instructions § 105-60.305-5 Searches. (a) GSA may charge for the...

  11. 41 CFR 105-60.305-5 - Searches.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 60..., Policies, Interpretations, Manuals, and Instructions § 105-60.305-5 Searches. (a) GSA may charge for the...

  12. Catalog Federation and Interoperability for Geoinformatics

    NASA Astrophysics Data System (ADS)

    Memon, A.; Lin, K.; Baru, C.

    2008-12-01

    With the increasing proliferation of online resources in the geosciences, including data, tools, and software services, there is also a proliferation of catalogs containing metadata that describe these resources. To realize the vision articulated in the NSF Workshop on Building a National Geoinformatics System, March 2007 (where a user can sit at a terminal and easily search, discover, integrate and use distributed geoscience resources), it will be essential that a search request be able to traverse these multiple metadata catalogs. In this paper, we describe our effort at prototyping catalog interoperability across multiple metadata catalogs. An example of a metadata catalog is the one employed in the GEON Project (www.geongrid.org). The central GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, therefore, can be embedded in any software application. There has been a requirement from some of the GEON collaborators (for example, at the University of Hyderabad, India, and the Navajo Technical College, New Mexico) to deploy their own catalogs, storing information about their resources locally while publishing some of this information for broader access and use. Thus, a search must now be able to span multiple, independent GEON catalogs. Next, some of our collaborators, e.g., GEO Grid (Global Earth Observations Grid) in Japan, are implementing the Catalog Services for the Web (CS-W) standard for their catalog, thereby requiring the search to span catalogs implemented using the CS-W standard as well. Finally, we have recently deployed a search service to access all EarthScope data products, which are distributed across organizations in Seattle, WA (IRIS), Boulder, CO (UNAVCO), and Potsdam, Germany (ICDP/GFZ). This service essentially implements a virtual catalog (the actual catalogs and data are stored at the remote locations). So, there is a need to incorporate such third-party searches within a broader search function, such as GEONsearch in the GEON Portal. We will discuss technical issues involved in designing and deploying such a multi-catalog search service in GEON.

  13. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  14. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  15. A Meta-Data Driven Approach to Searching for Educational Resources in a Global Context.

    ERIC Educational Resources Information Center

    Wade, Vincent P.; Doherty, Paul

    This paper presents the design of an Internet-enabled search service that supports educational resource discovery within an educational brokerage service. More specifically, it presents the design and implementation of a metadata-driven approach to implementing the distributed search and retrieval of Internet-based educational resources and…

  16. Pre-Service Teachers' Use of Library Databases: Some Insights

    ERIC Educational Resources Information Center

    Lamb, Janeen; Howard, Sarah; Easey, Michael

    2014-01-01

    The aim of this study is to investigate if providing mathematics education pre-service teachers with animated library tutorials on library and database searches changes their searching practices. This study involved the completion of a survey by 138 students and seven individual interviews before and after library search demonstration videos were…

  17. A framework with Cucho algorithm for discovering regular plans in mobile clients

    NASA Astrophysics Data System (ADS)

    Tsiligaridis, John

    2017-09-01

    In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) over multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular Algorithm (BRA) and the Partition Value Algorithm (PVA), provide static and dynamic RBP construction with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP working with many broadcasting plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end, a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions for the existence of solutions to the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cucho Search Algorithm (CS), with Lévy flight behavior, has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to CS has been made in order to strengthen the local search. The CS can also discover RBPs with the minimum number of channels. With all of the above, modern servers can be upgraded with these capabilities for RBP discovery with fewer channels.

  18. Creating a FIESTA (Framework for Integrated Earth Science and Technology Applications) with MagIC

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.

    2017-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC) has recently developed a containerized web application to considerably reduce the friction in contributing, exploring and combining valuable and complex datasets for the paleo-, geo- and rock magnetic scientific community. The data produced in this scientific domain are inherently hierarchical, and the community's evolving approaches to this scientific workflow, from sampling to taking measurements to multiple levels of interpretations, require a large and flexible data model to adequately annotate the results and ensure reproducibility. Historically, contributing such detail in a consistent format has been prohibitively time-consuming and often resulted in only publishing the highly derived interpretations. The new open-source (https://github.com/earthref/MagIC) application provides a flexible upload tool integrated with the data model to easily create a validated contribution, and a powerful search interface for discovering datasets and combining them to enable transformative science. MagIC is hosted at EarthRef.org along with several interdisciplinary geoscience databases. A FIESTA (Framework for Integrated Earth Science and Technology Applications) is being created by generalizing MagIC's web application for reuse in other domains. The application relies on a single configuration document that describes the routing, data model, component settings and external service integrations. The container hosts an isomorphic Meteor JavaScript application, a MongoDB database and an ElasticSearch search engine. Multiple containers can be configured as microservices to serve portions of the application, or rely on externally hosted MongoDB, ElasticSearch, or third-party services to efficiently scale computational demands. FIESTA is particularly well suited for many Earth Science disciplines with its flexible data model, mapping, account management, upload tool to private workspaces, reference metadata, image galleries, full-text searches and detailed filters. EarthRef's Seamount Catalog of bathymetry and morphology data, EarthRef's Geochemical Earth Reference Model (GERM) databases, and Oregon State University's Marine and Geology Repository (http://osu-mgr.org) will benefit from custom adaptations of FIESTA.

  19. JPL's On-Line Solar System Data Service

    NASA Astrophysics Data System (ADS)

    Giorgini, J. D.; Yeomans, D. K.; Chamberlin, A. B.; Chodas, P. W.; Jacobson, R. A.; Keesey, M. S.; Lieske, J. H.; Ostro, S. J.; Standish, E. M.; Wimberly, R. N.

    1996-09-01

    Numerous data products from the JPL ephemeris team are being made available via an interactive telnet computer service and separate web page. For over 15,000 comets and asteroids, 60 natural satellites, and 9 planets, users with an Internet connection can easily create and download information 24 hours a day, 7 days a week. These data include customized, high precision ephemerides, orbital and physical characteristics, and search-lists of comets and asteroids that match combinations of up to 39 different parameters. For each body, the user can request computation of more than 70 orbital and physical quantities. Ephemeris output can be generated in ICRF/J2000.0 and FK4/1950.0 reference frames with TDB, TT, or UTC timescales, as appropriate, at user-specified intervals. Computed tables are derived from the same ephemerides used at JPL for radar astronomy and spacecraft navigation. The dynamics and computed observables include relativistic effects. Available ephemeris time spans currently range from A.D. 1599-2200 for the planets to a few decades for the satellites, comets and asteroids. Information on the interference from sunlight and moonlight is available. As an example of a few of the features available, we note that a user could easily generate information on satellite and planetary magnitudes, illuminated fractions, and the planetographic longitudes and latitudes of their centers and sub-solar points as seen from a particular observatory location on Earth. Satellite transits, occultations and eclipses are available as well. The resulting ASCII tables can be transferred to the user's host computer via e-mail, ftp, or kermit protocols. For those who have WWW access, the telnet solar system ephemeris service will be one feature of the JPL solar system web page. This page will provide up-to-date physical and orbital characteristics as well as current and predicted observing opportunities for all solar system bodies. Close Earth approaches and radar observations will be provided for comets and asteroids.

  20. 12 CFR 1070.22 - Fees for processing requests for CFPB records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CFPB shall charge the requester for the actual direct cost of the search, including computer search time, runs, and the operator's salary. The fee for computer output will be the actual direct cost. For... and the cost of operating the computer to process a request) equals the equivalent dollar amount of...

  1. GloVis

    USGS Publications Warehouse

    Houska, Treva R.; Johnson, A.P.

    2012-01-01

    The Global Visualization Viewer (GloVis) trifold provides basic information for online access to a subset of satellite and aerial photography collections from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The GloVis (http://glovis.usgs.gov/) browser-based utility allows users to search and download National Aerial Photography Program (NAPP), National High Altitude Photography (NHAP), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Earth Observing-1 (EO-1), Global Land Survey, Moderate Resolution Imaging Spectroradiometer (MODIS), and TerraLook data. Minimum computer system requirements and customer service contact information also are included in the brochure.

  2. A Dynamical System Approach to the Surface Search of Debris from MH370

    NASA Astrophysics Data System (ADS)

    Mancho, Ana M.; Garcia-Garrido, V. J.; Wiggins, S.; Mendoza, C.

    2015-11-01

    The disappearance of Malaysia Airlines flight MH370 on the morning of the 8th of March 2014 is one of the great mysteries of our time. One relevant aspect of this mystery is that not a single piece of debris from the aircraft was found during the intensive surface search carried out in the months following the crash. Difficulties in the search efforts, due to the uncertainty in the plane's final impact point and the time elapsed since the accident, raised the question of how the debris was scattered in an always-moving ocean for which multiple datasets did not uniquely determine its state. Our approach to this problem is based on dynamical systems tools that identify dynamic barriers and coherent structures governing transport. By combining different ocean data with these mathematical techniques, we are able to assess the spatio-temporal state of the ocean in the priority search area at the time of impact and in the following weeks. Using this information we propose a revised search strategy by showing why one might not have expected to find debris in some large search areas targeted by the search services, and by determining regions where impact debris might have been expected but which were not subjected to any exploration. This research has been supported by MINECO under grants MTM2014-56392-R and ICMAT Severo Ochoa project SEV-2011-0087 and ONR grant No. N00014-01-1-0769. Computational support from CESGA is acknowledged.
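
    The Lagrangian backbone of such an analysis, advecting virtual debris through a time-dependent velocity field, can be sketched briefly. The flow below is the standard double-gyre toy field standing in for real ocean datasets, and the impact point and parameters are invented for illustration.

```python
# Sketch: advect debris particles through a time-dependent flow with RK4.
import numpy as np

A, EPS = 0.1, 0.1  # toy double-gyre amplitude and perturbation strength

def velocity(t, p):
    """Analytic double-gyre field (a stand-in for ocean current data)."""
    x, y = p
    a = EPS * np.sin(0.2 * t)
    b = 1.0 - 2.0 * a
    f = a * x**2 + b * x
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)
    return np.array([u, v])

def rk4_step(t, p, dt):
    """One fourth-order Runge-Kutta step along a particle trajectory."""
    k1 = velocity(t, p)
    k2 = velocity(t + dt / 2, p + dt / 2 * k1)
    k3 = velocity(t + dt / 2, p + dt / 2 * k2)
    k4 = velocity(t + dt, p + dt * k3)
    return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# scatter particles around a hypothetical impact point and advect them
pts = np.array([1.0, 0.5]) + 0.05 * np.random.default_rng(1).normal(size=(100, 2))
for t in np.arange(0.0, 20.0, 0.1):
    pts = np.array([rk4_step(t, p, 0.1) for p in pts])
```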

  3. Aggregating Queries Against Large Inventories of Remotely Accessible Data

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Fulker, D. W.

    2016-12-01

    Those seeking to discover data for a specific purpose often encounter search results that are so large as to be useless without computing assistance. This situation arises with increasing frequency, in part because repositories contain ever greater numbers of granules, and their granularities may well be poorly aligned or even orthogonal to the data-selection needs of the user. This presentation describes a recently developed service for simultaneously querying large lists of OPeNDAP-accessible granules to extract specified data. The specifications include a richly expressive set of data-selection criteria, applicable to content as well as metadata, and the service has been tested successfully against lists naming hundreds of thousands of granules. Querying such numbers of local files (i.e., granules) on a desktop or laptop computer is practical (e.g., by using a scripting language), but this practicality is diminished when the data are remote and thus best accessed through a Web-services interface. In these cases, which are increasingly common, scripted queries can take many hours because of inherent network latencies, and communication dropouts can add fragility to such scripts, yielding gaps in the acquired results. In contrast, OPeNDAP's new aggregated-query services enable data discovery in the context of very large inventory sizes. These capabilities have been developed for use with OPeNDAP's Hyrax server, an open-source realization of DAP (the "Data Access Protocol," a specification widely used in NASA, NOAA and other data-intensive contexts). These aggregated-query services exhibit good response times (on the order of seconds, not hours) even for inventories that list hundreds of thousands of source granules.
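
    For contrast with the server-side service, here is what the naive client-side fan-out looks like; the granule URLs and constraint expression are placeholders, and this loop suffers exactly the per-request latency and dropout fragility the abstract describes.

```python
# Naive client-side aggregated query over a list of DAP-accessible granules.
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(url, constraint):
    """Request one granule's constrained subset (DAP ASCII response form)."""
    resp = requests.get(f"{url}.ascii?{constraint}", timeout=30)
    resp.raise_for_status()
    return url, resp.text

def aggregated_query(granule_urls, constraint, workers=32):
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch, u, constraint) for u in granule_urls]
        for f in futures:
            try:
                url, text = f.result()
                results[url] = text
            except requests.RequestException:
                pass  # tolerate dropouts instead of aborting the whole sweep
    return results
```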

  4. Providing Database Services in a Nationwide Research Organisation--Coexistence of Traditional Information Services and a Modern CD-ROM/Online Hybrid Solution.

    ERIC Educational Resources Information Center

    Bowman, Benjamin F.

    For the past two decades the central Information Retrieval Services of the Max Planck Society has been providing database searches for scientists in Max Planck Institutes and Research Groups throughout Germany. As a supplement to traditional search services offered by professional intermediaries, they have recently fostered the introduction of a…

  5. Particle Engineering Research Center at the University of Florida

    Science.gov Websites


  6. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained, and its performance is compared to other existing methods for financial option valuation. In the third and final problem, the TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new-generation passenger vehicles. These uncertainties are analyzed in order to determine the conditions under which new-generation vehicles can compete with established technologies.
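
    A skeletal version of the thesis's core heuristic helps fix ideas: tabu search is a local search that forbids recently visited solutions for a set tenure so it can escape local optima. The sketch below is generic (maximization over an abstract neighborhood), far simpler than the time-dependent, moving-target variants developed in the thesis.

```python
# Minimal generic tabu-search skeleton (maximization).
def tabu_search(initial, neighbors, score, tenure=7, iters=200):
    best = current = initial
    tabu = []  # recently visited solutions, forbidden for `tenure` steps
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=score)  # best admissible move,
        tabu.append(current)                  # even if it worsens the score
        if len(tabu) > tenure:
            tabu.pop(0)
        if score(current) > score(best):
            best = current
    return best

# toy usage: maximize -(x - 3)^2 over the integers with +/-1 moves
print(tabu_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # 3
```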

  7. Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project

    NASA Astrophysics Data System (ADS)

    Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.

    2011-12-01

    The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA with developing an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At the Jet Propulsion Laboratory (JPL), we have built the underlying infrastructure of the LMMP interoperable geospatial information system and its single point of entry, the LMMP Portal, by employing a number of open standards and technologies. The Portal exposes a set of services that allow users to search, visualize, subset, and download lunar data managed by the system. Users also have access to a set of tools that visualize, analyze and annotate the data. The infrastructure and Portal are based on a web service oriented architecture. We designed the system to support solar system bodies in general, including asteroids, Earth and the planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware, and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform- and application-independent access to the services and data, offering, for instance, iPad and Android mobile applications and large-screen multi-touch displays with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services make use of open standards including Representational State Transfer (REST) and the Open Geospatial Consortium (OGC)'s Web Map Service (WMS), Web Coverage Service (WCS), and Web Feature Service (WFS). Its data management services have been built on top of a set of open technologies including: Object Oriented Data Technology (OODT) - an open source data catalog, archive, file management, and data grid framework; OpenSSO - an open source access management and federation platform; Solr - an open source enterprise search platform; Redmine - an open source project collaboration and management framework; GDAL - an open source geospatial data abstraction library; and others. Its data products are compliant with the Federal Geographic Data Committee (FGDC) metadata standard. This standardization allows users to access the data products via custom-written applications or off-the-shelf applications such as Google Earth. We will demonstrate this ready-to-use system for data discovery and visualization by walking through the data services provided through the Portal, such as browse, search, and other tools. We will further demonstrate image viewing and layering of lunar map images from the Internet via mobile devices such as Apple's iPad.
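
    Because the map layers are exposed through standard OGC interfaces, any WMS client can pull them. The snippet below shows the shape of such a request; the endpoint and layer name are hypothetical placeholders, not LMMP's actual values.

```python
# Sketch of an OGC WMS 1.3.0 GetMap request (hypothetical endpoint/layer).
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "lunar_mosaic",            # hypothetical layer name
    "CRS": "CRS:84",                     # lon/lat axis order
    "BBOX": "-10,-10,10,10",
    "WIDTH": 512, "HEIGHT": 512,
    "FORMAT": "image/png",
}
resp = requests.get("https://example.org/lmmp/wms", params=params, timeout=30)
resp.raise_for_status()
with open("tile.png", "wb") as f:
    f.write(resp.content)  # a 512x512 rendered map tile
```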

  8. Command Home Page

    Science.gov Websites

    An education coordinator explains details of the Montgomery G.I. Bill for active-duty service members to Naval Special Warfare personnel; the bill covers fees, yearly books and supplies, and a monthly housing allowance for qualified service members.

  9. 7 CFR 1962.40 - Liquidation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SERVICE, RURAL UTILITIES SERVICE, AND FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE (CONTINUED) PROGRAM... searches should be obtained from the same source as is used when making a loan. If obtaining the searches... send these forms to the borrower as soon as a decision is made to liquidate. The procedures set out in...

  10. In Search of Gender Free Paradigms for Computer Science Education. [Proceedings of a Preconference Research Workshop at the National Educational Computing Conference (Nashville, Tennessee, June 24, 1990).

    ERIC Educational Resources Information Center

    Martin, C. Dianne, Ed.; Murchie-Beyma, Eric, Ed.

    This monograph includes nine papers delivered at a National Educational Computing Conference (NECC) preconference workshop, and a previously unpublished paper on gender and attitudes. The papers, which are presented in four categories, are: (1) "Report on the Workshop: In Search of Gender Free Paradigms for Computer Science Education"…

  11. Parallelization of combinatorial search when solving knapsack optimization problem on computing systems based on multicore processors

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with a model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in the Boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the different cores of a multi-core processor, are also discussed. The paper provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, a formula offered by the author is given for estimating the theoretical maximum computational acceleration that can be achieved by parallelizing the search zone into search spheres over an unlimited number of processor cores.
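
    The decomposition described above can be sketched as follows: fixing the first K item bits defines 2^K disjoint "search spheres" of the Boolean space, each explored exhaustively by a separate worker process. This is a minimal illustration under assumed toy data; it reproduces neither the author's exact model nor the speedup-estimation formula.

        from itertools import product
        from multiprocessing import Pool

        WEIGHTS = [4, 7, 3, 9, 5, 6, 2, 8]
        VALUES  = [5, 9, 4, 12, 6, 7, 2, 10]
        CAPACITY = 20
        K = 3  # first K bits fixed per search sphere -> 2**K spheres

        def explore_sphere(prefix):
            """Exhaustively search one sphere: all completions of a fixed prefix."""
            n_rest = len(WEIGHTS) - len(prefix)
            best = (0, None)
            for suffix in product((0, 1), repeat=n_rest):
                bits = prefix + suffix
                w = sum(wi for wi, b in zip(WEIGHTS, bits) if b)
                if w <= CAPACITY:
                    v = sum(vi for vi, b in zip(VALUES, bits) if b)
                    if v > best[0]:
                        best = (v, bits)
            return best

        if __name__ == "__main__":
            spheres = list(product((0, 1), repeat=K))
            with Pool() as pool:  # one sphere per available worker core
                results = pool.map(explore_sphere, spheres)
            print(max(results))  # best (value, selection) across all spheres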

  12. Data Discovery with IBM Watson

    NASA Astrophysics Data System (ADS)

    Fessler, J.

    2016-12-01

    IBM Watson is a cognitive computing system that uses machine learning, statistical analysis, and natural language processing to find and understand the clues in questions posed to it. Watson was made famous when it bested two champions on TV's Jeopardy! show. Since then, Watson has evolved into a platform of cognitive services that can be trained on very granular fields of study. Watson is being used to support a number of subject domains, such as cancer research, public safety, engineering, and the intelligence community. IBM will be providing a presentation and demonstration of the Watson technology and will discuss its capabilities, including natural language processing, text analytics and enterprise search, as well as cognitive computing with deep Q&A. The team will also give examples of how IBM Watson technology is being used to support real-world problems across a number of public sector agencies.

  13. Fast parallel tandem mass spectral library searching using GPU hardware acceleration

    PubMed Central

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K.; Martin, Daniel B.

    2011-01-01

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper we present a proof-of-concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment. PMID:21545112
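
    The rate-limiting vector multiplication described above can be sketched in a few lines of NumPy; a GPU implementation such as the one in this paper performs the same batched dot products in CUDA. The spectrum dimensions and library size here are arbitrary toy values, not real proteomics data.

        import numpy as np

        rng = np.random.default_rng(0)
        N_BINS, N_LIB = 2000, 10_000        # binned m/z axis, library size

        library = rng.random((N_LIB, N_BINS)).astype(np.float32)
        query = rng.random(N_BINS).astype(np.float32)

        # Normalize so each dot product is a cosine similarity in [0, 1].
        library /= np.linalg.norm(library, axis=1, keepdims=True)
        query /= np.linalg.norm(query)

        # Core rate-limiting step: one arithmetically dense matrix-vector
        # product scores the query against every library spectrum at once.
        scores = library @ query
        best = np.argsort(scores)[-5:][::-1]  # top-5 candidate identifications
        print(best, scores[best])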

  14. Evaluation of a Mobile Phone App for Providing Adolescents With Sexual and Reproductive Health Information, New York City, 2013-2016.

    PubMed

    Steinberg, Allyna; Griffin-Tomas, Marybec; Abu-Odeh, Desiree; Whitten, Alzen

    The New York City (NYC) Department of Health and Mental Hygiene released the Teens in NYC mobile phone application (app) in 2013 as part of a program to promote sexual and reproductive health among adolescents aged 12-19 in NYC. The app featured a locator that allowed users to search for health service providers by sexual health services, contraceptive methods, and geographic locations. We analyzed data on searches from the Where to Go section of the app to understand the patterns of use of the app's search functionality. From January 7, 2013, through March 20, 2016, the app was downloaded more than 20 000 times, and more than 25 000 unique searches were conducted within the app. Results suggest that the app helped adolescents discover and access a wide range of sexual health services, including less commonly used contraceptives. Those designing similar apps should consider incorporating search functionality by sexual health service (including abortion), contraceptive method, and user location.

  15. Ocean Drilling Program: Mirror Sites

    Science.gov Websites

    Ocean Drilling Program mirror sites, offering publication services and products, drilling services and tools, the online Janus database, and search of the ODP/TAMU web. For further information, see www.iodp-usio.org.

  16. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information

    PubMed Central

    2013-01-01

    Background: Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult for third-party software developers to collect, manage and process all of these entries in one place without significant investment in hardware and software infrastructure, and in its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. Results: We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. The dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can follow one of three exploration paths: simple data searching based on the specified user’s query, advanced data searching based on the specified user’s query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service to provide the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. Conclusions: search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature and has great potential. The potential will grow further in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/. PMID:23452691

  17. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult for third-party software developers to collect, manage and process all of these entries in one place without significant investment in hardware and software infrastructure, and in its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. The dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can follow one of three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service to provide the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature and has great potential. The potential will grow further in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/.
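
    The orchestration of eUtils calls that both records above describe can be illustrated with a minimal eSearch-then-eFetch pipeline against NCBI's public eUtils endpoints (the base URL and utility names are the real NCBI ones; the query term and parameters are illustrative, and this is a sketch, not search GenBank's own code). It requires network access.

        import requests

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

        # Step 1: eSearch - find GenBank (nucleotide) IDs matching a query.
        ids = requests.get(f"{EUTILS}/esearch.fcgi", params={
            "db": "nucleotide",
            "term": "BRCA1[Gene] AND human[Organism]",  # illustrative query
            "retmax": 5,
            "retmode": "json",
        }, timeout=30).json()["esearchresult"]["idlist"]

        # Step 2: eFetch - retrieve the corresponding records in GenBank format.
        records = requests.get(f"{EUTILS}/efetch.fcgi", params={
            "db": "nucleotide",
            "id": ",".join(ids),
            "rettype": "gb",
            "retmode": "text",
        }, timeout=60).text

        print(records[:500])  # first lines of the fetched GenBank records

    A macro in the sense of the abstract is essentially a saved choreography of such calls, where the output IDs of one step feed the next.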

  18. E-print Network Alert Service

    Science.gov Websites

    The E-print Network alert service notifies users of new postings matching the search profiles they submit; postings can come from a number of sites. Users register for the service, create search strategies for as many profiles as they wish, and can also browse by discipline.

  19. Robotic disaster recovery efforts with ad-hoc deployable cloud computing

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.

    2013-06-01

    Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, complicated by the dynamic disaster recovery environment. In a typical SaR response scenario, responder robots require different levels of processing capability during various parts of the response effort and need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.

  20. The multimedia computer for office-based patient education: a systematic review.

    PubMed

    Wofford, James L; Smith, Edward D; Miller, David P

    2005-11-01

    Use of the multimedia computer for education is widespread in schools and businesses, and yet computer-assisted patient education is rare. In order to explore the potential use of computer-assisted patient education in the office setting, we performed a systematic review of randomized controlled trials (search date April 2004 using MEDLINE and Cochrane databases). Of the 26 trials identified, outcome measures included clinical indicators (12/26, 46.1%), knowledge retention (12/26, 46.1%), health attitudes (15/26, 57.7%), level of shared decision-making (5/26, 19.2%), health services utilization (4/26, 17.6%), and costs (5/26, 19.2%), respectively. Four trials targeted patients with breast cancer, but the clinical issues were otherwise diverse. Reporting of the testing of randomization (76.9%) and appropriate analysis of main effect variables (70.6%) were more common than reporting of a reliable randomization process (35.3%), blinding of outcomes assessment (17.6%), or sample size definition (29.4%). We concluded that the potential for improving the efficiency of the office through computer-assisted patient education has been demonstrated, but better proof of the impact on clinical outcomes is warranted before this strategy is accepted in the office setting.

  1. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  2. 34 CFR 5.60 - Schedule of fees.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rate of pay per hour of the employee conducting the search plus 16 percent of that rate. (iii) Computer... operator plus 16% of that rate plus $287 per hour for computer operation. Two hours of search time on a... of the computer operator's basic rate of pay per hour plus 16 percent of that rate. (2) Review of...

  3. A Different Web-Based Geocoding Service Using Fuzzy Techniques

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Abbaspour, R. A.; Zare Zadiny, A.

    2015-12-01

    Geocoding - the process of finding a position based on descriptive data such as an address or postal code - is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps present geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when they use available online geocoding services. In existing geocoding services, the concepts of proximity and nearness are not modelled appropriately, and these services search for an address only by address matching based on descriptive data. In addition, there are also some limitations in displaying search results. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques with the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions, and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides several capabilities for users, such as the ability to search multi-part addresses, to search for places based on their location, to represent results as non-point features, and to display search results ranked by their priority.
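
    A minimal sketch of the fuzzy-nearness idea described above: each reference feature yields a fuzzy "nearness" membership raster, and a fuzzy overlay (here the minimum operator, i.e. AND semantics) combines them. The membership function, grid, and feature locations are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def nearness(distance_m, d_half=500.0):
            """Fuzzy 'near' membership: 1 at the feature, 0.5 at d_half metres."""
            return 1.0 / (1.0 + (distance_m / d_half) ** 2)

        # Toy 100x100 grid (one cell = 10 m) with two reference features.
        yy, xx = np.mgrid[0:100, 0:100]
        def dist_to(cy, cx):
            return 10.0 * np.hypot(yy - cy, xx - cx)

        near_school = nearness(dist_to(20, 30))
        near_park = nearness(dist_to(70, 60))

        # Fuzzy overlay: a cell is only as good as its worst "nearness"
        # requirement, so take the per-cell minimum membership.
        combined = np.minimum(near_school, near_park)
        best_cell = np.unravel_index(np.argmax(combined), combined.shape)
        print(best_cell, combined[best_cell])  # highest-priority location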

  4. A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.

    2016-12-01

    More and more Earth data analytics software products are published onto the Internet as a service, in the format of either a heavyweight WSDL service or a lightweight RESTful API. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mashup) services into value-added ones. Therefore, it is important to have a technique that is capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, mainly rely on syntax- or semantics-based service matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks featuring leading terms. Afterwards, Bloom filters are generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
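
    As a minimal sketch of the membership structure named above, here is a toy Bloom filter that a virtual router could build over its local service terms. The bit-array size, hash count, and vocabulary are illustrative assumptions; the point is that a router can answer "might this term be served below me?" in constant space, with false positives but no false negatives.

        import hashlib

        class BloomFilter:
            def __init__(self, m=1024, k=4):
                self.m, self.k, self.bits = m, k, 0

            def _positions(self, term):
                # Derive k bit positions from salted SHA-256 digests of the term.
                for i in range(self.k):
                    h = hashlib.sha256(f"{i}:{term}".encode()).hexdigest()
                    yield int(h, 16) % self.m

            def add(self, term):
                for p in self._positions(term):
                    self.bits |= 1 << p

            def might_contain(self, term):
                # False positives possible; false negatives are not.
                return all(self.bits >> p & 1 for p in self._positions(term))

        # One filter per virtual router, summarizing its local service terms.
        router = BloomFilter()
        for term in ["precipitation", "aerosol", "sea-surface-temperature"]:
            router.add(term)

        print(router.might_contain("aerosol"))        # True
        print(router.might_contain("soil-moisture"))  # False (almost surely)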

  5. What We've Learned From Doing Usability Testing on OpenURL Resolvers and Federated Search Engines

    ERIC Educational Resources Information Center

    Cervone, Frank

    2005-01-01

    OpenURL resolvers and federated search engines are important new services in the library field. For some librarians, these services may seem "old hat" by now, but for the majority these services are still in the early stages of implementation or planning. In many cases, these two services are offered as a seamlessly integrated whole.…

  6. How to achieve universal coverage of cataract surgical services in developing countries: lessons from systematic reviews of other services.

    PubMed

    Blanchet, Karl; Gordon, Iris; Gilbert, Clare E; Wormald, Richard; Awan, Haroon

    2012-12-01

    Since the Declaration of Alma Ata, universal coverage has been at the heart of international health. The purpose of this study was to review the evidence on factors and interventions which are effective in promoting coverage of and access to cataract and other health services, focusing on developing countries. A thorough literature search for systematic reviews was conducted. The information resources searched were Medline, The Cochrane Library and the Health Systems Evidence database. Medline was searched from January 1950 to June 2010. The Cochrane Library search consisted of identifying all systematic reviews produced by the Cochrane Eyes and Vision Group and the Cochrane Effective Practice and Organisation of Care group. These reviews were assessed for potential inclusion in the review. The Health Systems Evidence database hosted by McMaster University was searched to identify overviews of systematic reviews. No reviews met the inclusion criteria for cataract surgery. The literature search on other health sectors identified 23 systematic reviews providing robust evidence on the main factors facilitating universal coverage. The main enabling factors influencing access to services in developing countries were peer education, the deployment of staff to rural areas, task shifting, integration of services, supervision of health staff, eliminating user fees and scaling up of health insurance schemes. There are significant research gaps in eye care. There is a pressing need for further high quality primary research on health systems-related factors to understand how the delivery of eye care services and health systems' capacities are interrelated.

  7. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for a new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive, while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general-purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
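
    To illustrate the additive-homomorphic primitive this record (and its duplicate below) relies on, here is a toy Paillier-style scheme that sums encrypted fingerprint bits so the server can compute a match count without ever seeing the query. The primes are assumed small demo values and far too weak to be secure, and the protocol is a simplification, not the authors' actual construction.

        import math, random

        # Toy Paillier keypair (tiny assumed primes; illustration only, NOT secure).
        p, q = 1_000_003, 1_000_033
        n = p * q
        n2 = n * n
        g = n + 1
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)                 # valid decryption factor when g = n + 1

        def encrypt(m):
            r = random.randrange(2, n)       # gcd(r, n) = 1 with overwhelming probability
            return pow(g, m, n2) * pow(r, n, n2) % n2

        def decrypt(c):
            return (pow(c, lam, n2) - 1) // n * mu % n

        def he_add(c1, c2):
            # Multiplying ciphertexts adds the underlying plaintexts.
            return c1 * c2 % n2

        # Client encrypts its fingerprint bits; the server sums ciphertexts at
        # positions where its compound has a 1, never seeing the query bits.
        query_bits = [1, 0, 1, 1, 0, 1]
        compound_bits = [1, 1, 1, 0, 0, 1]
        enc_bits = [encrypt(b) for b in query_bits]

        enc_count = encrypt(0)
        for eb, cb in zip(enc_bits, compound_bits):
            if cb:
                enc_count = he_add(enc_count, eb)

        print(decrypt(enc_count))  # 3 shared on-bits, visible only to the client

    Such shared-bit counts are the building block of fingerprint similarity measures (e.g., Tversky- or Jaccard-style indices) used in compound screening.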

  8. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background: Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for a new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results: In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive, while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general-purpose multi-party computation. Conclusion: We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  9. Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data

    NASA Astrophysics Data System (ADS)

    Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd

    2018-02-01

    In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known but the frequency and spin-down rates of the neutron stars are unknown which makes the searches computationally limited. In a previous paper we have proposed a general framework for deciding on what target we should spend computational resources and in what proportion, what frequency and spin-down ranges we should search for every target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the most promising three supernova remnants identified as such in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: The smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and ran on the volunteer computing project Einstein@Home, covers a much larger frequency range.

  10. Weekly Surveillance Reports - MeCDC; DHHS Maine

    Science.gov Websites

    Weekly surveillance reports from the Maine CDC, a division of the Maine Department of Health and Human Services, alongside health topics A-Z, data/reports, and information for health care providers and laboratories.

  11. Policy Manual for a Computerized Search Service in an Academic Library.

    ERIC Educational Resources Information Center

    Jackson, William J.

    This proposed policy manual for the computerized information retrieval service of the University of Houston System outlines policies for specific elements of its operation: (1) users--who is/is not eligible for service and for equipment use; (2) cost--rates charged; (3) responsibilities of searchers--maintenance of searching skills, scheduling of…

  12. NWS Marine, Tropical, and Tsunami Services Branch Feedback

    Science.gov Websites

    Feedback page of the National Weather Service Marine, Tropical, and Tsunami Services Branch, with marine forecasts in text and graphic form searchable by city and state, zip code, or forecast office.

  13. A scoping review of cloud computing in healthcare.

    PubMed

    Griebel, Lena; Prokosch, Hans-Ulrich; Köpcke, Felix; Toddenroth, Dennis; Christoph, Jan; Leb, Ines; Engel, Igor; Sedlmayr, Martin

    2015-03-19

    Cloud computing is a recent and fast-growing area of development in healthcare. Ubiquitous, on-demand access to virtually endless resources in combination with a pay-per-use model allows for new ways of developing, delivering and using services. Cloud computing is often used in an "OMICS context", e.g. for computing in genomics, proteomics and molecular medicine, while other fields of application still seem to be underrepresented. Thus, the objective of this scoping review was to identify the current state and hot topics in research on cloud computing in healthcare beyond this traditional domain. MEDLINE was searched in July 2013 and in December 2014 for publications containing the terms "cloud computing" and "cloud-based". Each journal and conference article was categorized and summarized independently by two researchers, who consolidated their findings. 102 publications were analyzed and 6 main topics were found: telemedicine/teleconsultation, medical imaging, public health and patient self-management, hospital management and information systems, therapy, and secondary use of data. Commonly used features are broad network access for sharing and accessing data and rapid elasticity to dynamically adapt to computing demands. Eight articles favor the pay-for-use characteristics of cloud-based services, avoiding upfront investments. Nevertheless, while 22 articles present very general potentials of cloud computing in the medical domain and 66 articles describe conceptual or prototypic projects, only 14 articles report on successful implementations. Further, in many articles cloud computing is seen as an analogy to internet-/web-based data sharing, and the characteristics of the particular cloud computing approach are unfortunately not really illustrated. Even though cloud computing in healthcare is of growing interest, only a few successful implementations exist so far, and many papers just use the term "cloud" synonymously with "using virtual machines" or "web-based", with no described benefit of the cloud paradigm. The biggest threat to adoption in the healthcare domain is caused by involving external cloud partners: many issues of data safety and security are still to be solved. Until then, cloud computing is favored more for singular, individual features such as elasticity, pay-per-use and broad network access, rather than as a cloud paradigm in its own right.

  14. An effective support system of emergency medical services with tablet computers.

    PubMed

    Yamada, Kosuke C; Inoue, Satoshi; Sakamoto, Yuichiro

    2015-02-27

    There were over 5,000,000 ambulance dispatches during 2010 in Japan, and the time for transportation has been increasing; it took over 37 minutes from dispatch to arrival at the hospital. One way to reduce transportation time by ambulance is to shorten the time spent searching for an appropriate facility/hospital during the prehospital phase. Although an information system linking medical institutions and emergency medical services (EMS) was established in 2003 in Saga Prefecture, Japan, it had not been utilized efficiently. In April 2011, the Saga Prefectural Government renewed the previous system, making it the first real-time support system in Japan able to efficiently manage emergency demand and acceptance. The objective of this study was to evaluate whether the new system promotes efficient emergency transportation for critically ill patients and provides valuable epidemiological data. The new system allows both the emergency personnel in the ambulance, or at the scene, and the medical staff in each hospital to share up-to-date information about available hospitals by means of cloud computing. All 55 ambulances in Saga are equipped with tablet computers connected through third-generation/long-term evolution (3G/LTE) networks. When the emergency personnel arrive on the scene and discern the type of the patient's illness, they can search for an appropriate facility/hospital with their tablet computer based on the patient's symptoms and the available medical specialists. Data were collected prospectively over a three-year period from April 1, 2011 to March 31, 2013. The transportation time by ambulance in Saga was shortened for the first time since the statistics were first kept in 1999; the mean time was 34.3 minutes in 2010 (based on administrative statistics) and 33.9 minutes (95% CI 33.6-34.1) in 2011. The ratio of transportation to tertiary care facilities in Saga decreased by 3.12% from the year before, from 32.7% in 2010 (regional average) to 29.58% (9085/30,709) in 2011. The system entry completion rate was 100.00% (93,110/93,110) for the emergency personnel and ranged from 46.11% (14,159/30,709) to 47.57% (14,639/30,772) for the medical staff over the three-year period. Finally, the new system reduced operational costs by 40,000,000 yen (about $400,000 US) a year. The transportation time by ambulance was shorter following the implementation of tablet computers in the current EMS support system in Saga Prefecture, Japan, and cloud computing reduced the cost of the EMS system.

  15. Astronomical Software Directory Service

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Payne, Harry; Hayes, Jeffrey

    1997-01-01

    With the support of NASA's Astrophysics Data Program (NRA 92-OSSA-15), we have developed the Astronomical Software Directory Service (ASDS): a distributed, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching. Users are performing about 400 searches per month. A new aspect of our service is the inclusion of telescope and instrumentation manuals, which prompted us to change the name to the Astronomical Software and Documentation Service. ASDS was originally conceived to serve two purposes: to provide a useful Internet service in an area of expertise of the investigators (astronomical software), and as a research project to investigate various architectures for searching through a set of documents distributed across the Internet. Two of the co-investigators were then installing and maintaining astronomical software as their primary job responsibility. We felt that a service which incorporated our experience in this area would be more useful than a straightforward listing of software packages. The original concept was for a service based on the client/server model, which would function as a directory/referral service rather than as an archive. For performing the searches, we began our investigation with a decision to evaluate the Isite software from the Center for Networked Information Discovery and Retrieval (CNIDR). This software was intended as a replacement for Wide-Area Information Service (WAIS), a client/server technology for performing full-text searches through a set of documents. Isite had some additional features that we considered attractive, and we enjoyed the cooperation of the Isite developers, who were happy to have ASDS as a demonstration project. We ended up staying with the software throughout the project, making modifications to take advantage of new features as they came along, as well as influencing the software development. The Web interface to the search engine is provided by a gateway program written in C++ by a consultant to the project (A. Warnock).

  16. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2008-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specification and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible, easy to use Graphical User Interface which is driven by cascading style sheets, which make it even simpler for reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC) and integrated visualization tools. Other features include: Filtering and dynamic sorting of search results, bookmarkable search results, save, retrieve, and modify search criteria.

  17. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet

    2008-01-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specification and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible, easy to use Graphical User Interface which is driven by cascading style sheets, which make it even simpler for reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC) and integrated visualization tools. Other features include: Filtering and dynamic sorting of search results, bookmarkable search results, save, retrieve, and modify search criteria.

  18. Designing the Search Service for Enterprise Portal based on Oracle Universal Content Management

    NASA Astrophysics Data System (ADS)

    Bauer, K. S.; Kuznetsov, D. Y.; Pominov, A. D.

    2017-01-01

    An Enterprise Portal is an important part of an organization in the informative and innovative space; the portal provides collaboration between employees and the organization. This article gives valuable background on Enterprise Portals and related technologies. The paper presents the integration of Oracle WebCenter Portal and UCM Server in detail. The focus is on tools for the Enterprise Portal, and on the Search Service in particular. The paper also presents several UML diagrams to describe use cases for the Search Service and the main components of this application.

  19. A novel computational model to probe visual search deficits during motor performance

    PubMed Central

    Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy

    2016-01-01

    Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596

  20. Characterizing internet health information seeking strategies by socioeconomic status: a mixed methods approach.

    PubMed

    Perez, Susan L; Kravitz, Richard L; Bell, Robert A; Chan, Man Shan; Paterniti, Debora A

    2016-08-09

    The Internet is valuable for those with limited access to health care services because of its low cost and wealth of information. Our objectives were to investigate how the Internet is used to obtain health-related information and how individuals with differing socioeconomic resources navigate it when presented with a health decision. Study participants were recruited from public settings and social service agencies. Participants listened to one of two clinical scenarios - consistent with influenza or bacterial meningitis - and then conducted an Internet search. Screen-capture video software recorded each Internet search. Participants' Internet search strategies were analyzed and coded for pre- and post-search diagnostic guesses and information-seeking patterns. Individuals who did not have a college degree and were recruited from locations offering social services were categorized as "lower socioeconomic status" (SES); the remainder were categorized as "higher SES." Participants were 78 Internet health information seekers, aged 21 to 35 years, who experienced barriers to accessing health care services. Lower-SES individuals were more likely to use an intuitive, rather than deliberative, approach to Internet health information seeking. Lower- and higher-SES participants did not differ in the tendency to make diagnostic guesses based on Internet searches. Lower-SES participants were more likely than their higher-SES counterparts to narrow the scope of their search. Our findings suggest that individuals with different levels of socioeconomic status vary in the heuristics and search patterns they rely upon to direct their searches. The influence and use of credible information in the process of making a decision is associated with education and prior experiences with health care services. Those with limited resources may be disadvantaged when turning to the Internet to make a health decision.

  1. Criteria for Comparing Children's Web Search Tools.

    ERIC Educational Resources Information Center

    Kuntz, Jerry

    1999-01-01

    Presents criteria for evaluating and comparing Web search tools designed for children. Highlights include database size; accountability; categorization; search access methods; help files; spell check; URL searching; links to alternative search services; advertising; privacy policy; and layout and design. (LRW)

  2. The Evolution of Web Searching.

    ERIC Educational Resources Information Center

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  3. Exploring Contextual Models in Chemical Patent Search

    NASA Astrophysics Data System (ADS)

    Urbain, Jay; Frieder, Ophir

    We explore the development of probabilistic retrieval models for integrating term statistics with entity search using multiple levels of document context to improve the performance of chemical patent search. A distributed indexing model was developed to enable efficient named entity search and aggregation of term statistics at multiple levels of patent structure including individual words, sentences, claims, descriptions, abstracts, and titles. The system can be scaled to an arbitrary number of compute instances in a cloud computing environment to support concurrent indexing and query processing operations on large patent collections.
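
    A sketch of the multi-level term-statistics idea described above: per-level term frequencies are combined with interpolation weights, in the spirit of a Jelinek-Mercer-style mixture. The levels, weights, and counts are illustrative assumptions, not the authors' trained retrieval model.

        from collections import Counter

        # Term frequencies aggregated at three nested context levels of one
        # patent (illustrative; the paper indexes words, sentences, claims,
        # descriptions, abstracts, and titles).
        levels = {
            "claims": Counter({"benzene": 4, "catalyst": 2, "solvent": 1}),
            "description": Counter({"benzene": 9, "catalyst": 7, "solvent": 5}),
            "abstract": Counter({"benzene": 1, "catalyst": 1}),
        }
        weights = {"claims": 0.5, "description": 0.3, "abstract": 0.2}

        def score(query_terms):
            """Interpolated relevance: mix per-level term likelihoods."""
            s = 0.0
            for term in query_terms:
                for level, tf in levels.items():
                    total = sum(tf.values())
                    s += weights[level] * tf[term] / total
            return s

        print(score(["benzene", "catalyst"]))  # higher = better match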

  4. A computing method for spatial accessibility based on grid partition

    NASA Astrophysics Data System (ADS)

    Ma, Linbing; Zhang, Xinchang

    2007-06-01

    An accessibility computing method and process based on grid partition is put forward in this paper. Two important factors affecting traffic - the density of the road network and the relative spatial resistance of different land uses - are integrated into the computation of the traffic cost of each grid cell. An A* algorithm is introduced to search for the optimum traffic cost of a path through the grid; the detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its data sources are easily obtained. Moreover, by changing the heuristic search information, more reasonable computing results can be obtained. To confirm our research, a software package was developed in C# under the ArcEngine 9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou was carried out.
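
    The search step can be sketched as a standard A* over the cost grid. Here the heuristic is the Manhattan distance scaled by the cheapest cell cost, which never overestimates the remaining cost and so preserves optimality; the grid values and 4-neighborhood are illustrative assumptions, not the paper's data or exact heuristic.

        import heapq

        # Per-cell traversal cost, e.g. resistance / road density (illustrative).
        GRID = [
            [1, 1, 2, 9],
            [1, 5, 2, 9],
            [1, 5, 1, 1],
            [1, 1, 1, 1],
        ]
        ROWS, COLS = len(GRID), len(GRID[0])
        MIN_COST = min(min(row) for row in GRID)

        def heuristic(a, b):
            # Manhattan distance scaled by the cheapest cell: admissible.
            return MIN_COST * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

        def a_star(start, goal):
            frontier = [(heuristic(start, goal), 0, start, [start])]
            settled = {}
            while frontier:
                f, g, cell, path = heapq.heappop(frontier)
                if cell == goal:
                    return g, path            # first goal pop is optimal
                if settled.get(cell, float("inf")) <= g:
                    continue
                settled[cell] = g
                r, c = cell
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < ROWS and 0 <= nc < COLS:
                        ng = g + GRID[nr][nc]  # cost to enter the neighbor
                        heapq.heappush(frontier,
                                       (ng + heuristic((nr, nc), goal),
                                        ng, (nr, nc), path + [(nr, nc)]))
            return None

        print(a_star((0, 0), (0, 3)))  # (minimum traffic cost, grid path)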

  5. A regional technology transfer program. [North Carolina Industrial Applications Center for the Southeast

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The proliferation of online searching capabilities among its industrial clients, changes in marketing staff and direction, use of Dun and Bradstreet marketing service files, growth of the Annual Service Package program, and services delivered to clients at the NASA-funded North Carolina Science and Technology Research Center are described. The library search service was reactivated and enlarged, and a survey was conducted on the effectiveness of the NC/STRC Technical Bulletin. Several quotations from clients assess the overall value of the Center's services.

  6. Grid Application Meta-Repository System: Repository Interconnectivity and Cross-domain Application Usage in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen

    Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independent of their underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures allowing interoperability with new generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.

  7. US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access

    NASA Astrophysics Data System (ADS)

    Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.

    2012-04-01

    The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and the OGC CSW 2.0.2. Currently, data services are being deployed for the US Dept. of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets, along with a wide variety of other information and non-information resources, are registered in the catalog system. Metadata for registration is produced by various workflows, including harvesting from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications include an ESRI ArcMap extension for catalog search and data acquisition from map services, and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.

  8. 76 FR 21017 - United States v. Google Inc. and ITA Software Inc., Proposed Final Judgment and Competitive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    .... Google seeks to expand its search services by launching an Internet travel site to offer comparative... and other companies offering travel-related products and services. 14. Metas enable consumers to search for flights but do not offer booking services. When a consumer on a Meta travel site enters a...

  9. 45 CFR 2540.202 - What two search components of the National Service Criminal History Check must I satisfy to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Criminal History Check must I satisfy to determine an individual's suitability to serve in a covered... Service Criminal History Check must I satisfy to determine an individual's suitability to serve in a... National Service Criminal History Check, which consists of the following two search components: (a) State...

  10. 45 CFR 2540.202 - What two search components of the National Service Criminal History Check must I satisfy to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Criminal History Check must I satisfy to determine an individual's suitability to serve in a covered... Service Criminal History Check must I satisfy to determine an individual's suitability to serve in a... National Service Criminal History Check, which consists of the following two search components: (a) State...

  11. About | DOE Data Explorer

    Science.gov Websites

    About page for the DOE Data Explorer, covering DDE FAQs, the DOE Data ID Service, workshops, and contact information. The site's advanced search spans data types including figures/plots, genome/genetics data, interactive data, maps, multimedia, numeric data, simulations, and specialized mixes.

  12. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.

  13. An Assessment, Survey, and Systems Engineering Design of Information Sharing and Discovery Systems in a Network-Centric Environment

    DTIC Science & Technology

    2009-12-01

    type of information available through DISA search tools: Centralized Search, Federated Search, and Enterprise Search (Defense Information Systems... Federated Search, and Enterprise Search services. Likewise, EFD and GCDS support COIs in discovering information by making information

  14. Searching for periodic sources with LIGO. II. Hierarchical searches

    NASA Astrophysics Data System (ADS)

    Brady, Patrick R.; Creighton, Teviet

    2000-04-01

    The detection of quasi-periodic sources of gravitational waves requires the accumulation of signal to noise over long observation times. This represents the most difficult data analysis problem facing experimenters with detectors such as those at LIGO. If not removed, Earth-motion induced Doppler modulations and intrinsic variations of the gravitational-wave frequency make the signals impossible to detect. These effects can be corrected (removed) using a parametrized model for the frequency evolution. In a previous paper, we introduced such a model and computed the number of independent parameter space points for which corrections must be applied to the data stream in a coherent search. Since this number increases with the observation time, the sensitivity of a search for continuous gravitational-wave signals is computationally bound when data analysis proceeds at a similar rate to data acquisition. In this paper, we extend the formalism developed by Brady et al. [Phys. Rev. D 57, 2101 (1998)], and we compute the number of independent corrections Np(ΔT, N) required for incoherent search strategies. These strategies rely on the method of stacked power spectra: a demodulated time series is divided into N segments of length ΔT, each segment is Fourier transformed, a power spectrum is computed, and the N spectra are summed up. This method is incoherent; phase information is lost from segment to segment. Nevertheless, power from a signal with fixed frequency (in the corrected time series) is accumulated in a single frequency bin, and amplitude signal to noise accumulates as ~N^(1/4) (assuming the segment length ΔT is held fixed). For fixed available computing power, there are optimal values for N and ΔT which maximize the sensitivity of a search in which data analysis takes a total time NΔT. We estimate that the optimal sensitivity of an all-sky search that uses incoherent stacks is a factor of 2-4 better than achieved using coherent Fourier transforms, assuming the same available computing power; incoherent methods are computationally efficient at exploring large parameter spaces. We also consider a two-stage hierarchical search in which candidate events from a search using short data segments are followed up in a search using longer data segments. This hierarchical strategy yields a further 20-60% improvement in sensitivity in all-sky (or directed) searches for old (≥1000 yr) slow (≤200 Hz) pulsars, and for young (≥40 yr) fast (≤1000 Hz) pulsars. Assuming enhanced LIGO detectors (LIGO-II) and 10^12 flops of effective computing power, we examine the sensitivity to sources in three specialized classes. A limited-area search for pulsars in the Galactic core would detect objects with gravitational ellipticities of ε ≳ 5×10^-6 at 200 Hz; such limits provide information about the strength of the crust in neutron stars. Gravitational waves emitted by unstable r-modes of newborn neutron stars would be detected out to distances of ~8 Mpc, if the r-modes saturate at a dimensionless amplitude of order unity and an optical supernova provides the position of the source on the sky. In searches targeting low-mass x-ray binary systems (in which accretion-driven spin up is balanced by gravitational-wave spin down), it is important to use information from electromagnetic observations to determine the orbital parameters as accurately as possible. An estimate of the difficulty of these searches suggests that objects with x-ray fluxes exceeding 2×10^-8 erg cm^-2 s^-1 would be detected using the enhanced interferometers in their broadband configuration. This puts Sco X-1 on the verge of detectability in a broadband search; the amplitude signal to noise would be increased by a factor of order ~5-10 by operating the interferometer in a signal-recycled, narrow-band configuration. Further work is needed to determine the optimal search strategy when limited information is available about the frequency evolution of a source in a targeted search.
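    The stacking procedure at the heart of the incoherent method can be stated in a few lines of NumPy. The sketch below assumes an already demodulated time series and toy signal parameters; it illustrates the segment-FFT-and-sum idea, not the production search code.

```python
# Minimal NumPy sketch of the "stacked power spectra" method: a demodulated
# time series is cut into N segments, each segment is Fourier transformed,
# and the N power spectra are summed. Phase information between segments is
# discarded, but power from a fixed-frequency signal accumulates in one bin.
import numpy as np

def stacked_power(x, n_segments):
    # Trim so the series divides evenly, then reshape into segments.
    segments = np.reshape(x[: len(x) // n_segments * n_segments],
                          (n_segments, -1))
    spectra = np.abs(np.fft.rfft(segments, axis=1)) ** 2
    return spectra.sum(axis=0)

fs, T, N = 1024.0, 64.0, 16            # sample rate (Hz), duration (s), stacks
t = np.arange(0, T, 1 / fs)
x = np.random.normal(size=t.size) + 0.3 * np.sin(2 * np.pi * 100.0 * t)

power = stacked_power(x, N)
freqs = np.fft.rfftfreq(t.size // N, d=1 / fs)
print("loudest bin: %.2f Hz" % freqs[np.argmax(power)])   # -> 100.00 Hz
```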

  15. Building and evaluating an informatics tool to facilitate analysis of a biomedical literature search service in an academic medical center library.

    PubMed

    Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha

    2013-01-01

    This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.

  16. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints; second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
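    The two-phase idea can be sketched compactly. In the toy example below, a naive proportional split of a global response-time budget stands in for the paper's MILP decomposition step, so the subsequent per-task local selection can be shown end to end; all service names and QoS numbers are invented.

```python
# Illustrative two-phase QoS-aware selection. The paper decomposes global
# constraints with MILP; here a naive proportional split stands in for that
# step. Assumes the global budget is at least the sum of the fastest
# per-task response times, so every local constraint stays feasible.
candidates = {  # abstract task -> [(service name, response time ms, utility)]
    "flight":  [("s1", 300, 0.9), ("s2", 120, 0.6), ("s3", 80, 0.3)],
    "hotel":   [("h1", 200, 0.8), ("h2", 90, 0.5)],
    "payment": [("p1", 150, 0.7), ("p2", 60, 0.4)],
}
GLOBAL_BUDGET_MS = 400.0

# Phase 1 (stand-in for MILP): split the budget in proportion to each
# task's fastest candidate.
fastest = {t: min(rt for _, rt, _ in svcs) for t, svcs in candidates.items()}
total = sum(fastest.values())
local_budget = {t: GLOBAL_BUDGET_MS * fastest[t] / total for t in candidates}

# Phase 2: purely local selection -- per task, pick the highest-utility
# service that meets the local constraint. No global search is needed.
composition = {}
for task, svcs in candidates.items():
    feasible = [s for s in svcs if s[1] <= local_budget[task]]
    composition[task] = max(feasible, key=lambda s: s[2])

print(composition)
print("total response time:", sum(s[1] for s in composition.values()))
```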

  17. A search asymmetry reversed by figure-ground assignment.

    PubMed

    Humphreys, G W; Müller, H

    2000-05-01

    We report evidence demonstrating that a search asymmetry favoring concave over convex targets can be reversed by altering the figure-ground assignment of edges in shapes. Visual search for a concave target among convex distractors is faster than search for a convex target among concave distractors (a search asymmetry). By using shapes with ambiguous local figure-ground relations, we demonstrated that search can be efficient (with search slopes around 10 ms/item) or inefficient (with search slopes around 30-40 ms/item) with the same stimuli, depending on whether edges are assigned to concave or convex "figures." This assignment process can operate in a top-down manner, according to the task set. The results suggest that attention is allocated to spatial regions following the computation of figure-ground relations in parallel across the elements present. This computation can also be modulated by top-down processes.

  18. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
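    The query-distribution idea behind DCBLAST can be illustrated in a few lines: split a multi-FASTA query into chunks, run independent blastp jobs in parallel, then concatenate the tabular output. This sketch is not DCBLAST itself (which schedules chunks across HPC nodes); it parallelizes only across local cores, assumes NCBI BLAST+ is on PATH, and uses illustrative file and database names.

```python
# Query-splitting parallel BLAST sketch. Assumes NCBI BLAST+ (blastp) is
# installed and a database "nr_local" was built with makeblastdb; file
# names are illustrative.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def split_fasta(path, n_chunks):
    """Round-robin the FASTA records into n_chunks chunk files."""
    records = Path(path).read_text().split(">")[1:]
    records = [r if r.endswith("\n") else r + "\n" for r in records]
    paths = []
    for i, chunk in enumerate(records[i::n_chunks] for i in range(n_chunks)):
        if not chunk:
            continue
        p = Path(f"chunk_{i}.fasta")
        p.write_text("".join(">" + r for r in chunk))
        paths.append(p)
    return paths

def run_blast(chunk_path):
    """Run one independent blastp job on a chunk, tabular output."""
    out = chunk_path.with_suffix(".tsv")
    subprocess.run(["blastp", "-query", str(chunk_path), "-db", "nr_local",
                    "-outfmt", "6", "-out", str(out)], check=True)
    return out

if __name__ == "__main__":
    chunks = split_fasta("queries.fasta", n_chunks=8)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_blast, chunks))
    # Tabular (-outfmt 6) results can simply be concatenated.
    Path("all_hits.tsv").write_text("".join(p.read_text() for p in results))
```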

  19. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  20. Einstein-Home search for periodic gravitational waves in early S5 LIGO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, B. P.; Abbott, R.; Adhikari, R.

    This paper reports on an all-sky search for periodic gravitational waves from sources such as deformed isolated rapidly spinning neutron stars. The analysis uses 840 hours of data from 66 days of the fifth LIGO science run (S5). The data were searched for quasimonochromatic waves with frequencies f in the range from 50 to 1500 Hz, with a linear frequency drift ḟ (measured at the solar system barycenter) in the range −f/τ …

  1. Use of microcomputers in health and social service applications in developing nations.

    PubMed

    Bertrand, W E

    1987-01-01

    The microcomputer is creating something of a revolution in many developing nations where historically there has been a lack of access to computer power at all levels of the health sector. For the first time, practitioners and researchers, often trained in computer techniques for developing countries, have access through microcomputers to data and information manipulation in their local workplace. While the history of microcomputers in such settings is short, this article presents early evidence from several countries which indicates the usefulness of various applications. The majority of the applications reported in the literature from clinical and research laboratories is made up of national data base systems and special studies of morbidity and mortality. Secondary applications, including assistance in bibliographic searches and word and graphics processing, are also reviewed in this article. A summary of the most utilized microcomputer hardware configurations completes the review.

  2. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  3. Rapid protein alignment in the cloud: HAMOND combines fast DIAMOND alignments with Hadoop parallelism.

    PubMed

    Yu, Jia; Blom, Jochen; Sczyrba, Alexander; Goesmann, Alexander

    2017-09-10

    The introduction of next generation sequencing has caused a steady increase in the amounts of data that have to be processed in modern life science. Sequence alignment plays a key role in the analysis of sequencing data, e.g. within whole genome sequencing or metagenome projects. BLAST is a commonly used alignment tool that was the standard approach for more than two decades, but in recent years faster alternatives have been proposed, including RapSearch, GHOSTX, and DIAMOND. Here we introduce HAMOND, an application that uses Apache Hadoop to parallelize DIAMOND computation in order to scale out the calculation of alignments. HAMOND is fault tolerant and scalable by utilizing large cloud computing infrastructures like Amazon Web Services. HAMOND has been tested in comparative genomics analyses and showed promising results both in efficiency and accuracy.

  4. Designing learning management system interoperability in semantic web

    NASA Astrophysics Data System (ADS)

    Anistyasari, Y.; Sarno, R.; Rochmawati, N.

    2018-01-01

    The extensive adoption of learning management systems (LMS) has put the focus on the interoperability requirement. Interoperability is the ability of different computer systems, applications or services to communicate, share and exchange data, information, and knowledge in a precise, effective and consistent way. Semantic web technology and the use of ontologies are able to provide the required computational semantics and interoperability for the automation of tasks in an LMS. The purpose of this study is to design learning management system interoperability in the semantic web, which so far has not been investigated deeply. Moodle is utilized to design the interoperability: several of its database tables are enhanced and some features are added. Semantic web interoperability is provided by exploiting an ontology over course content materials. The ontology is further utilized as a search tool to match users' queries against available courses. It is concluded that LMS interoperability in the semantic web is achievable.
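    As an illustration of ontology-backed course search of the kind described, the hedged sketch below loads a tiny, invented course vocabulary into rdflib and matches a query term against course topics with SPARQL; the vocabulary and data are not from the paper.

```python
# Sketch of matching user queries against ontology-annotated courses.
# The ex: vocabulary and the two courses are invented for illustration.
import rdflib

g = rdflib.Graph()
g.parse(data="""
@prefix ex: <http://example.org/lms#> .
ex:course1 ex:title "Intro to Databases" ; ex:topic "sql" .
ex:course2 ex:title "Semantic Web" ; ex:topic "ontology" ; ex:topic "sparql" .
""", format="turtle")

QUERY = """
PREFIX ex: <http://example.org/lms#>
SELECT DISTINCT ?title WHERE {
    ?course ex:topic ?topic ; ex:title ?title .
    FILTER(CONTAINS(LCASE(STR(?topic)), "ontology"))
}
"""
for row in g.query(QUERY):
    print(row.title)   # -> "Semantic Web"
```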

  5. On the predictability of protein database search complexity and its relevance to optimization of distributed searches.

    PubMed

    Deciu, Cosmin; Sun, Jun; Wall, Mark A

    2007-09-01

    We discuss several aspects related to load balancing of database search jobs in a distributed computing environment, such as a Linux cluster. Load balancing is a technique for making the most of multiple computational resources, which is particularly relevant in environments in which the usage of such resources is very high. The particular case of the Sequest program is considered here, but the general methodology should apply to any similar database search program. We show how the runtimes for Sequest searches of tandem mass spectral data can be predicted from profiles of previous representative searches, and how this information can be used for better load balancing of novel data. A well-known heuristic load balancing method is shown to be applicable to this problem, and its performance is analyzed for a variety of search parameters.
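    The abstract does not name the heuristic; a classic heuristic of this kind is the LPT (longest-processing-time-first) greedy rule, sketched below on invented runtime predictions: sort jobs by predicted runtime and always hand the next job to the least-loaded node.

```python
# LPT greedy load balancing over predicted runtimes. The job names and
# runtimes are invented; in the paper's setting they would come from
# profiles of previous representative Sequest searches.
import heapq

def lpt_schedule(predicted_runtimes, n_nodes):
    heap = [(0.0, node) for node in range(n_nodes)]   # (current load, node)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for job, runtime in sorted(predicted_runtimes.items(),
                               key=lambda kv: kv[1], reverse=True):
        load, node = heapq.heappop(heap)              # least-loaded node
        assignment[node].append(job)
        heapq.heappush(heap, (load + runtime, node))
    return assignment, max(load for load, _ in heap)  # schedule, makespan

jobs = {"run_a": 95.0, "run_b": 40.0, "run_c": 37.0, "run_d": 33.0, "run_e": 30.0}
assignment, makespan = lpt_schedule(jobs, n_nodes=2)
print(assignment, makespan)
```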

  6. 45 CFR 612.10 - Fees

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION AVAILABILITY OF RECORDS AND... time) the charge is $11.50 for each quarter hour. (iii) Computer searches of records. NSF will charge... computer system(s) for that portion of operating time that is directly attributable to searching for...

  7. 45 CFR 612.10 - Fees

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION AVAILABILITY OF RECORDS AND... time) the charge is $11.50 for each quarter hour. (iii) Computer searches of records. NSF will charge... computer system(s) for that portion of operating time that is directly attributable to searching for...

  8. 45 CFR 612.10 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION AVAILABILITY OF RECORDS AND... is $7.50 for each quarter hour. (iii) Computer searches of records. NSF will charge at the actual direct cost of conducting the search. This will include the cost of computer operations for that portion...

  9. 45 CFR 612.10 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION AVAILABILITY OF RECORDS AND... is $7.50 for each quarter hour. (iii) Computer searches of records. NSF will charge at the actual direct cost of conducting the search. This will include the cost of computer operations for that portion...

  10. 45 CFR 612.10 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION AVAILABILITY OF RECORDS AND... is $7.50 for each quarter hour. (iii) Computer searches of records. NSF will charge at the actual direct cost of conducting the search. This will include the cost of computer operations for that portion...

  11. Chemicals identified in feral and food animals: a data base. First annual report, October 1981. Volume I. Records 1-532

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cone, M.V.; Faust, R.A.; Baldauf, M.F.

    This data file is a companion to Chemicals Identified in Human Biological Media, A Data Base, and follows basically the same format. The data base on human burden is in its third year of publication. This is the first annual report for the feral and food animal file. Data were obtained primarily from the open literature through manual searches (retrospective to 1979) of the journals listed in Appendix A. The data base now contains information on 60 different substances. Chemicals are listed by Chemical Abstracts Service (CAS) registry numbers and preferred names in Appendix B. For the user's convenience, cross-referenced chemical lists of CAS preferred and common names are provided in Appendix C. The animals, tissues, and body fluids found to be contaminated by these chemicals are listed in Appendix D. The data base is published annually in tabular format with indices and chemical listings that allow specific searching. A limited number of custom computer searches of the data base are available in special cases when the published format does not allow for retrieval of needed information.

  12. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and spark is a good index for studying it. But the computation of spark is NP-hard. In this paper, we study the problem of computing spark. For some special matrices, for example the Gaussian random matrix and the 0-1 random matrix, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: one is a direct search method and the other a dual-tree search method. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree search method had higher efficiency than direct search, especially for matrices with nearly as many rows as columns.
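    The direct search method can be made concrete in a few lines: the spark is the size of the smallest linearly dependent column subset, so one can enumerate subsets of increasing size and test their rank. The exponential enumeration below also makes the NP-hardness tangible; it is a sketch, not the paper's implementation.

```python
# Direct search for the spark of a matrix: the smallest number of
# linearly dependent columns. Subset enumeration is exponential, which
# is why computing spark is hard in general.
import itertools
import numpy as np

def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return n + 1  # every column subset independent (only possible if n <= m)

A = np.random.randn(4, 6)   # Gaussian random, fewer rows than columns
print(spark(A))             # equals rows + 1 = 5 with probability 1
```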

  13. Semantic Integration for Marine Science Interoperability Using Web Technologies

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.

    2008-12-01

    The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org) promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented with the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools that support important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including Resource Description Framework (RDF), Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Also, sophisticated search functions, for example according to metadata items and vocabulary terms, are provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies by using a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies. Various relationships can be established, for example exactMatch, narrowerThan, and subClassOf. VINE can compute inferred mappings based on the given associations. Attributes about each mapping, like comments and a confidence level, can also be included. VINE also supports registering and storing resulting mapping files in the Ontology Registry. The presentation will describe the application of semantic technologies in general, and our planned applications in particular, to solve data management problems in the marine and environmental sciences.
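    As an illustration of the SPARQL-based search the registry supports, the sketch below issues a term query with SPARQLWrapper; the endpoint path is hypothetical and the SKOS-style labels are an assumption about how terms are modeled.

```python
# Hedged sketch of a client-side term search against an ontology
# registry's SPARQL endpoint. The endpoint path is hypothetical and the
# SKOS prefLabel modeling is an assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://mmisw.org/sparql")  # hypothetical endpoint path
sparql.setQuery("""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?term ?label WHERE {
    ?term skos:prefLabel ?label .
    FILTER(CONTAINS(LCASE(STR(?label)), "salinity"))
} LIMIT 10
""")
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["term"]["value"], "-", binding["label"]["value"])
```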

  14. Development and Implementation of Kumamoto Technopolis Regional Database T-KIND

    NASA Astrophysics Data System (ADS)

    Onoue, Noriaki

    T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on the technology, human resources and industries needed to realize Kumamoto Technopolis. It is composed of a coded database, an image database and a LAN inside the techno-research park that is the center of R&D in the Technopolis. The on-line system is built by networking general-purpose computers, minicomputers, optical disk file systems and so on, and the service is provided through public telephone lines. Two databases are now available, on enterprise information and human resource information; the former covers about 4,000 enterprises and the latter about 2,000 persons.

  15. The chief information officer--capturing healthcare's rare bird.

    PubMed

    Krinsky, M L

    1986-08-01

    While we occasionally conducted MIS executive searches during the 1970s, the recent pace has quickened substantially. Healthcare corporations need the MIS executive or CIO to keep the organization technologically and managerially current. Downsizing of acute-care facilities, expansion of outpatient services and creation of new programs have put a premium on current, computer-generated data. Skilled managers must rely on an efficient, flexible data processing department to evaluate options and make decisions about corporate strategy and program development. A presentable, articulate, personable MIS executive is a key ingredient in a successful management team. The position will continue to grow in importance and prominence in the fast-changing healthcare delivery industry.

  16. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    PubMed Central

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin

    2007-01-01

    Background: Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results: We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. Conclusion: We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/. PMID:17854506

  17. OntologyWidget - a reusable, embeddable widget for easily locating ontology terms.

    PubMed

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, J H Pate; Ball, Catherine A; Sherlock, Gavin

    2007-09-13

    Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/.

  18. 45 CFR 2551.27 - What two search components of the National Service Criminal History Check must I satisfy to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Criminal History Check must I satisfy to determine an individual's suitability to serve in a covered... Sponsor § 2551.27 What two search components of the National Service Criminal History Check must I satisfy... prohibited by State law, that you conduct and document a National Service Criminal History Check, which...

  19. 45 CFR 2551.27 - What two search components of the National Service Criminal History Check must I satisfy to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Criminal History Check must I satisfy to determine an individual's suitability to serve in a covered... Sponsor § 2551.27 What two search components of the National Service Criminal History Check must I satisfy... prohibited by State law, that you conduct and document a National Service Criminal History Check, which...

  20. Environmental Mission Impact Assessment

    DTIC Science & Technology

    2008-01-01

    System Agency's (DISA) Federated Search service. The mission impacts can be generated for a general rectangular area, or generated for routes, route...that respond to queries (formatted according to DISA's Federated Search specification). [Figure 2: EVIS service-oriented architecture design.]

  1. [Eye movement study in multiple object search process].

    PubMed

    Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin

    2017-04-01

    The aim of this study is to investigate the regularities of search time and the characteristics of eye movement behavior in multi-object visual search. The experimental task was implemented in software and presented characters on a 24-inch computer display. The subjects were asked to search for three targets among the characters. The three target characters within a group were highly similar to one another, while the degree of similarity between target characters and distractor characters differed across groups. We recorded search time and eye movement data throughout the experiment. The eye movement data showed that the number of fixation points was large when the target and distractor characters were similar. The subjects exhibited three visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern gave the best search performance: subjects who used the parallel-serial pattern took the least time to find the targets. The order in which the targets were presented significantly affected search performance, as did the degree of similarity between target and distractor characters.

  2. Measuring and Evaluating TCP Splitting for Cloud Services

    NASA Astrophysics Data System (ADS)

    Pathak, Abhinav; Wang, Y. Angela; Huang, Cheng; Greenberg, Albert; Hu, Y. Charlie; Kern, Randy; Li, Jin; Ross, Keith W.

    In this paper, we examine the benefits of split-TCP proxies, deployed in an operational world-wide network, for accelerating cloud services. We consider a fraction of a network consisting of a large number of satellite datacenters, which host split-TCP proxies, and a smaller number of mega datacenters, which ultimately perform computation or provide storage. Using web search as an exemplary case study, our detailed measurements reveal that a vanilla TCP splitting solution deployed at the satellite DCs reduces the 95th percentile of latency by as much as 43% when compared to serving queries directly from the mega DCs. Through careful dissection of the measurement results, we characterize how individual components, including proxy stacks, network protocols, packet losses and network load, can impact the latency. Finally, we shed light on further optimizations that can fully realize the potential of the TCP splitting solution.

  3. Scientific Data Storage for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Readey, J.

    2014-12-01

    Traditionally, data storage for geophysical software systems has centered on file-based formats and libraries such as NetCDF and HDF5. In contrast, cloud infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system, such as enhanced metadata search.
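    A hedged sketch of the storage split described above: array chunks go to an object store via boto3 while a plain dict stands in for the key-value database service. Bucket and dataset names are illustrative, running it requires AWS credentials and an existing bucket, and it is not the proposed library's actual API.

```python
# Object-store-backed array chunks plus a key-value metadata record.
# Bucket/dataset names are illustrative; a dict stands in for the
# database service the abstract describes.
import boto3
import numpy as np

s3 = boto3.client("s3")
BUCKET = "example-science-data"   # illustrative bucket name
metadata_db = {}                  # stand-in for the key-value database service

def put_dataset(name, array, chunk_rows):
    """Write an array as fixed-size chunk objects plus one metadata entry."""
    n_chunks = 0
    for i, start in enumerate(range(0, array.shape[0], chunk_rows)):
        chunk = np.ascontiguousarray(array[start:start + chunk_rows])
        s3.put_object(Bucket=BUCKET, Key=f"{name}/chunk/{i}", Body=chunk.tobytes())
        n_chunks = i + 1
    metadata_db[name] = {"shape": array.shape, "dtype": str(array.dtype),
                         "chunk_rows": chunk_rows, "n_chunks": n_chunks}

def get_chunk(name, i):
    """Fetch one chunk and restore its shape from the stored metadata."""
    meta = metadata_db[name]
    body = s3.get_object(Bucket=BUCKET, Key=f"{name}/chunk/{i}")["Body"].read()
    rows = min(meta["chunk_rows"], meta["shape"][0] - i * meta["chunk_rows"])
    return np.frombuffer(body, dtype=meta["dtype"]).reshape(rows, *meta["shape"][1:])

put_dataset("sst", np.random.rand(1000, 360).astype("float32"), chunk_rows=250)
print(get_chunk("sst", 2).shape)   # -> (250, 360)
```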

  4. A Framework for Integrating Oceanographic Data Repositories

    NASA Astrophysics Data System (ADS)

    Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.

    2010-12-01

    Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.

  5. 32 CFR 518.18 - Judicial actions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... procedures used to search for the requested records, (manual search of records, computer database search, etc... deliberate consideration of the institutional, commercial, and personal privacy interests that could be...

  6. 32 CFR 518.18 - Judicial actions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... procedures used to search for the requested records, (manual search of records, computer database search, etc... deliberate consideration of the institutional, commercial, and personal privacy interests that could be...

  7. 32 CFR 518.18 - Judicial actions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... procedures used to search for the requested records, (manual search of records, computer database search, etc... deliberate consideration of the institutional, commercial, and personal privacy interests that could be...

  8. The Climate-G testbed: towards a large scale data sharing environment for climate change

    NASA Astrophysics Data System (ADS)

    Aloisio, G.; Fiore, S.; Denvil, S.; Petitdidier, M.; Fox, P.; Schwichtenberg, H.; Blower, J.; Barbera, R.

    2009-04-01

    The Climate-G testbed provides an experimental large-scale data environment for climate change, addressing challenging data and metadata management issues. The main scope of Climate-G is to allow scientists to carry out geographical and cross-institutional climate data discovery, access, visualization and sharing. Climate-G is a multidisciplinary collaboration involving both climate and computer scientists, and it currently involves several partners: Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Institut Pierre-Simon Laplace (IPSL), Fraunhofer Institut für Algorithmen und Wissenschaftliches Rechnen (SCAI), National Center for Atmospheric Research (NCAR), University of Reading, University of Catania and University of Salento. To perform distributed metadata search and discovery, we adopted a CMCC metadata solution (which provides a high level of scalability, transparency, fault tolerance and autonomy) leveraging both P2P and grid technologies (GRelC Data Access and Integration Service). Moreover, data are available through OPeNDAP/THREDDS services, the Live Access Server, and the OGC-compliant Web Map Service, and they can be downloaded, visualized and accessed through the Climate-G Data Distribution Centre (DDC), the web gateway to the Climate-G digital library. The DDC is a data-grid portal allowing users to easily, securely and transparently perform search/discovery, metadata management, data access, data visualization, etc. Godiva2 (integrated into the DDC) displays 2D maps (and animations) and also exports maps for display on the Google Earth virtual globe. Presently, Climate-G publishes (through the DDC) about 2 TB of data related to the ENSEMBLES project (also including distributed replicas of data) as well as to the IPCC AR4. The main results of the proposed work are: a wide data access/sharing environment for climate change; a P2P/grid metadata approach; a production-level Climate-G DDC; high-quality tools for data visualization; metadata search/discovery across several countries/institutions; and an open environment for climate change data sharing.

  9. In Search of Speedier Searches.

    ERIC Educational Resources Information Center

    Peterson, Ivars

    1984-01-01

    Methods to make computer searching as simple and efficient as possible have led to the development of various data structures. Data structures specify the items involved in searching and what can be done to them. The nature and advantages of using "self-adjusting" data structures (self-adjusting binary search trees) are discussed. (JN)

  10. The Mercury System: Embedding Computation into Disk Drives

    DTIC Science & Technology

    2004-08-20

    enabling technologies to build extremely fast data search engines. We do this by moving the search closer to the data, and performing it in hardware...engine searches in parallel across a disk or disk surface. 2. System Parallelism: Searching is off-loaded to search engines and the main processor can

  11. Comparison of methods for the detection of gravitational waves from unknown neutron stars

    NASA Astrophysics Data System (ADS)

    Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.

    2016-12-01

    Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time-domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.

  12. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decreasing the computational cost of correlation-based centroiding methods used for point-source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF), using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits better performance than the other functions under various light-level conditions. In addition, the effectiveness of the fast search algorithms has been verified.
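    To make the combination concrete, the sketch below pairs the ADF similarity function with a three-step-style search: the ADF is evaluated on a 3×3 neighborhood at a coarse step, the step is halved, and the process repeats down to one pixel, so far fewer windows are evaluated than in an exhaustive search. This is an illustration, not the paper's implementation.

```python
# ADF similarity combined with a three-step-style block search. The
# search evaluates at most 9 windows per level instead of every shift.
import numpy as np

def adf(window, template):
    """Absolute difference function: lower means a better match."""
    return np.abs(window - template).sum()

def three_step_search(image, template, start, step=4):
    th, tw = template.shape
    y, x = start
    while step >= 1:
        best, best_cost = (y, x), adf(image[y:y+th, x:x+tw], template)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                ny, nx = y + dy, x + dx
                if 0 <= ny <= image.shape[0] - th and 0 <= nx <= image.shape[1] - tw:
                    cost = adf(image[ny:ny+th, nx:nx+tw], template)
                    if cost < best_cost:
                        best, best_cost = (ny, nx), cost
        y, x = best
        step //= 2                       # coarse-to-fine refinement
    return y, x                          # top-left corner of best window

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
tmpl = img[20:28, 30:38].copy()          # true position (20, 30)
print(three_step_search(img, tmpl, start=(16, 28)))   # -> (20, 30)
```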

  13. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  14. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  15. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background: For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results: We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion: The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
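    Hydra's source is not reproduced here; the toy stand-in below only shows the shape of the computation it distributes: each spectrum is scored against every candidate peptide (with a naive shared-peak count in place of the K-score) and the best match per spectrum is reduced. A local process pool stands in for the Hadoop cluster, and all masses are invented.

```python
# Toy stand-in for the distributed spectrum matching Hydra performs.
# A naive shared-peak count replaces the K-score; a process pool
# replaces Hadoop. All peptides and masses are invented.
from concurrent.futures import ProcessPoolExecutor

PEPTIDE_DB = {                      # peptide -> simplified fragment masses
    "PEPTIDE": {97, 195, 292, 406},
    "PROTEIN": {97, 226, 333, 450},
    "SCIENCE": {87, 190, 292, 410},
}

def best_match(item):
    """Map step: score one spectrum against the whole database."""
    spectrum_id, peaks = item
    scores = {pep: len(peaks & frags) for pep, frags in PEPTIDE_DB.items()}
    return spectrum_id, max(scores, key=scores.get)

if __name__ == "__main__":
    spectra = {"scan_1": {97, 195, 400, 406}, "scan_2": {87, 190, 292}}
    with ProcessPoolExecutor() as pool:          # reduce: collect best hits
        for sid, pep in pool.map(best_match, spectra.items()):
            print(sid, "->", pep)
```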

  16. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.

  17. Models of Dynamic Relations Among Service Activities, System State and Service Quality on Computer and Network Systems

    DTIC Science & Technology

    2010-01-01

    Service quality on computer and network systems has become increasingly important as many conventional service transactions are moved online. Service quality of computer and network services can be measured by the performance of the service process in throughput, delay, and so on. On a computer and network system, competing service requests of users and associated service activities change the state of limited system resources, which in turn affects the achieved service ... relations of service activities, system state and service quality

  18. 47 CFR 80.1125 - Search and rescue coordinating communications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Search and rescue coordinating communications. 80.1125 Section 80.1125 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Global Maritime Distress and Safety System (GMDSS...

  19. 41 CFR 105-60.305-5 - Searches.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false Searches. 105-60.305-5 Section 105-60.305-5 Public Contracts and Property Management Federal Property Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 60...

  20. 41 CFR 105-60.305-5 - Searches.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false Searches. 105-60.305-5 Section 105-60.305-5 Public Contracts and Property Management Federal Property Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 60...

  1. 41 CFR 105-60.305-5 - Searches.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false Searches. 105-60.305-5 Section 105-60.305-5 Public Contracts and Property Management Federal Property Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services Administration 60...

  2. Modern Techniques for Searching the Chemical Literature.

    ERIC Educational Resources Information Center

    Holm, Bart E.

    The chemists' information needs are for current awareness, selective dissemination, and retrospective search services covering research, development, engineering, production, and marketing information, located internally or externally and contained in journals, patents, theses, reports, data files, and information services, or obtained from people. This paper is…

  3. International use of an academic nephrology World Wide Web site: from medical information resource to business tool.

    PubMed

    Abbott, Kevin C; Oliver, David K; Boal, Thomas R; Gadiyak, Grigorii; Boocks, Carl; Yuan, Christina M; Welch, Paul G; Poropatich, Ronald K

    2002-04-01

    Studies of the use of the World Wide Web to obtain medical knowledge have largely focused on patients. In particular, neither the international use of academic nephrology World Wide Web sites (websites) as primary information sources nor the use of search engines (and search strategies) to obtain medical information have been described. Visits ("hits") to the Walter Reed Army Medical Center (WRAMC) Nephrology Service website from April 30, 2000, to March 14, 2001, were analyzed for the location of originating source using Webtrends, and search engines (Google, Lycos, etc.) were analyzed manually for search strategies used. From April 30, 2000 to March 14, 2001, the WRAMC Nephrology Service website received 1,007,103 hits and 12,175 visits. These visits were from 33 different countries, and the most frequent regions were Western Europe, Asia, Australia, the Middle East, Pacific Islands, and South America. The most frequent organization using the site was the military Internet system, followed by America Online and automated search programs of online search engines, most commonly Google. The online lecture series was the most frequently visited section of the website. Search strategies used in search engines were extremely technical. The use of "robots" by standard Internet search engines to locate websites, which may be blocked by mandatory registration, has allowed users worldwide to access the WRAMC Nephrology Service website to answer very technical questions. This suggests that it is being used as an alternative to other primary sources of medical information and that the use of mandatory registration may hinder users from finding valuable sites. With current Internet technology, even a single service can become a worldwide information resource without sacrificing its primary customers.

  4. Development of user-centered interfaces to search the knowledge resources of the Virginia Henderson International Nursing Library.

    PubMed

    Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane

    2003-01-01

    This poster describes the development of user-centered interfaces in order to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from library to web based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.

  5. Moon Search Algorithms for NASA's Dawn Mission to Asteroid Vesta

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mcfadden, Lucy A.; Skillman, David R.; McLean, Brian; Mutchler, Max; Carsenty, Uri; Palmer, Eric E.

    2012-01-01

    A moon or natural satellite is a celestial body that orbits a planetary body such as a planet, dwarf planet, or asteroid. Scientists seek to understand the origin and evolution of our solar system by studying the moons of these bodies. Additionally, searches for satellites of planetary bodies can be important for protecting the safety of a spacecraft as it approaches or orbits a planetary body. If a satellite of a celestial body is found, the mass of that body can also be calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety on its mission to the asteroid Vesta was the primary motivation for the work of Dawn's Satellite Working Group (SWG) in the summer of 2011. Dawn mission scientists and engineers utilized various computational tools and techniques in the search for satellites of Vesta. The objectives of this paper are to 1) introduce the natural satellite search problem, 2) present the computational challenges, approaches, and tools used when addressing this problem, and 3) describe applications of various image processing and computational algorithms for performing satellite searches to the electronic imaging and computer science community. Furthermore, we hope that this communication will enable Dawn mission scientists to improve their satellite search algorithms and tools and be better prepared to perform the same investigation in 2015, when the spacecraft is scheduled to approach and orbit the dwarf planet Ceres.
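
    One common computational step in such satellite searches is to suppress the static scene and flag transient sources. The sketch below illustrates the idea on synthetic frames, assuming co-registered images and Gaussian noise; it is a generic technique, not the SWG's actual pipeline.

        # Median-stack co-registered frames to build a static background,
        # then flag residual pixels well above the noise: a moving moon
        # appears in individual frames but not in the median.
        import numpy as np

        def candidate_pixels(frames, n_sigma=5.0):
            """Flag pixels that stand well above the static background."""
            stack = np.stack(frames)               # (n_frames, ny, nx)
            background = np.median(stack, axis=0)  # static stars survive
            residual = stack - background          # a moon leaves a trail
            noise = residual.std()
            return residual > n_sigma * noise

        rng = np.random.default_rng(0)
        frames = [rng.normal(100.0, 3.0, (64, 64)) for _ in range(5)]
        frames[2][30, 30] += 200.0                 # inject a transient "moon"
        masks = candidate_pixels(frames)
        print(masks[2][30, 30])                    # True: the injected source is flagged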

  6. Computer use, internet access, and online health searching among Harlem adults.

    PubMed

    Cohall, Alwyn T; Nye, Andrea; Moon-Howard, Joyce; Kukafka, Rita; Dye, Bonnie; Vaughan, Roger D; Northridge, Mary E

    2011-01-01

    Computer use, Internet access, and online searching for health information were assessed toward enhancing Internet use for health promotion. Cross-sectional random digit dial landline phone survey. Eight zip codes that comprised Central Harlem/Hamilton Heights and East Harlem in New York City. Adults 18 years and older (N=646). Demographic characteristics, computer use, Internet access, and online searching for health information. Frequencies for categorical variables and means and standard deviations for continuous variables were calculated and compared with analogous findings reported in national surveys from similar time periods. Among Harlem adults, ever computer use and current Internet use were 77% and 52%, respectively. High-speed home Internet connections were somewhat lower for Harlem adults than for U.S. adults overall (43% vs. 68%). Current Internet users in Harlem were more likely to be younger, white vs. black or Hispanic, better educated, and in better self-reported health than non-current users (p<.01). Of those who reported searching online for health information, 74% sought information on medical problems and thought that information found on the Internet affected the way they eat (47%) or exercise (44%). Many Harlem adults currently use the Internet to search for health information. High-speed connections and culturally relevant materials may facilitate health information searching for underserved groups. Copyright © 2011 by American Journal of Health Promotion, Inc.

  7. A case study in adaptable and reusable infrastructure at the Keck Observatory Archive: VO interfaces, moving targets, and more

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Cohen, Richard W.; Colson, Andrew; Gelino, Christopher R.; Good, John C.; Kong, Mihseh; Laity, Anastasia C.; Mader, Jeffrey A.; Swain, Melanie A.; Tran, Hien D.; Wang, Shin-Ywan

    2016-08-01

    The Keck Observatory Archive (KOA) (https://koa.ipac.caltech.edu) curates all observations acquired at the W. M. Keck Observatory (WMKO) since it began operations in 1994, including data from eight active instruments and two decommissioned instruments. The archive is a collaboration between WMKO and the NASA Exoplanet Science Institute (NExScI). Since its inception in 2004, the science information system used at KOA has adopted an architectural approach that emphasizes software re-use and adaptability. This paper describes how KOA is currently leveraging and extending open source software components to develop new services and to support delivery of a complete set of instrument metadata, which will enable more sophisticated and extensive queries than are currently possible. In August 2015, KOA deployed a program interface to discover public data from all instruments equipped with an imaging mode. The interface complies with version 2 of the Simple Imaging Access Protocol (SIAP), under development by the International Virtual Observatory Alliance (IVOA), which defines a standard mechanism for discovering images through spatial queries. The heart of the KOA service is an R-tree-based, database-indexing mechanism prototyped by the Virtual Astronomical Observatory (VAO) and further developed by the Montage Image Mosaic project, designed to provide fast access to large imaging data sets as a first step in creating wide-area image mosaics (such as mosaics of subsets of the 4.7 million images of the SDSS DR9 release). The KOA service uses the results of the spatial R-tree search to create an SQLite database for further relational filtering. The service uses a JSON configuration file to describe the association between instrument parameters and the service query parameters, making it applicable beyond the Keck instruments. The images generated at the Keck telescope usually do not encode the image footprints as WCS fields in the FITS file headers. Because SIAP searches are spatial, much of the effort in developing the program interface involved processing the instrument and telescope parameters to understand how accurately we can derive the WCS information for each instrument. This knowledge is now being fed back into the KOA databases as part of a program to include complete metadata for all imaging observations. The R-tree program was itself extended to support temporal (in addition to spatial) indexing, in response to requests from the planetary science community for a search engine to discover observations of Solar System objects. With this 3D-indexing scheme, the service performs very fast temporal and spatial matches against target ephemerides obtained from the JPL SPICE service. Our experiments indicate these matches can be more than 100 times faster than performing the temporal and spatial searches separately. Images of the tracks of the moving targets, overlaid with the image footprints, are computed with a new command-line visualization tool, mViewer, released with the Montage distribution. The service is currently in test and will be released in late summer 2016.
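
    The two-stage pattern described above, a fast spatial index feeding a relational pass, can be sketched as follows. The bounding-box prefilter below stands in for the R-tree index, and the table and column names are illustrative only, not KOA's actual schema.

        # Stage 1: cheap spatial cut (the real service uses an R-tree).
        # Stage 2: relational filtering of the matched subset in SQLite.
        import sqlite3

        images = [  # (id, ra_deg, dec_deg, instrument, exptime_s)
            (1, 150.10, 2.20, "NIRC2", 60.0),
            (2, 150.12, 2.21, "LRIS", 30.0),
            (3, 210.00, -5.00, "NIRC2", 60.0),
        ]

        def spatial_prefilter(rows, ra, dec, radius):
            """Bounding-box cut around the query position."""
            return [r for r in rows
                    if abs(r[1] - ra) <= radius and abs(r[2] - dec) <= radius]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE hits (id INT, ra REAL, dec REAL, "
                     "instrument TEXT, exptime REAL)")
        conn.executemany("INSERT INTO hits VALUES (?,?,?,?,?)",
                         spatial_prefilter(images, ra=150.11, dec=2.20, radius=0.1))

        for row in conn.execute("SELECT id, instrument FROM hits "
                                "WHERE instrument = ?", ("NIRC2",)):
            print(row)  # (1, 'NIRC2')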

  8. Proposal for a telehealth concept in the translational research model

    PubMed Central

    Silva, Angélica Baptista; Morel, Carlos Médicis; de Moraes, Ilara Hämmerli Sozzi

    2014-01-01

    OBJECTIVE To review the conceptual relationship between telehealth and translational research. METHODS A bibliographical search on telehealth was conducted in the Scopus, Cochrane BVS, LILACS and MEDLINE databases to find accounts of telehealth experiences discussed in conjunction with translational research in health. The search retrieved eight studies, which were analyzed using models of the five stages of translational research and the multiple strands of public health policy in the context of telehealth in Brazil. The models were applied to telehealth activities concerning the Network of Human Milk Banks, within the Telemedicine University Network. RESULTS The translational research cycle of human milk (collected, stored and distributed) involves several integrated telehealth initiatives, such as video conferencing, and software and portals for synthesizing knowledge, composing elements of an information ecosystem mediated by information and communication technologies in the health system. CONCLUSIONS Telehealth should comprise a set of activities in a computer-mediated network promoting the translation of knowledge between research and health services. PMID:24897057

  9. Exploring Gendered Notions: Gender, Job Hunting and Web Searches

    NASA Astrophysics Data System (ADS)

    Martey, R. M.

    Based on analysis of a series of interviews, this chapter suggests that in looking for jobs online, women confront gendered notions of the Internet as well as gendered notions of the jobs themselves. It argues that the social and cultural contexts of both the search tools and the search tasks should be considered in exploring how Web-based technologies serve women in a job search. For these women, the opportunities and limitations of online job-search tools were intimately related to their personal and social needs, especially needs for part-time work, maternity benefits, and career advancement. Although job-seeking services such as Monster.com were used frequently by most of these women, search services did not completely fulfill all their informational needs, and became an — often frustrating — initial starting point for a job search rather than an end-point.

  10. A Kind of Transformation of Information Service--Science and Technology Novelty Search in Chinese University Libraries

    ERIC Educational Resources Information Center

    Aiguo, Li

    2007-01-01

    Science and Technology Novelty Search (S&TNS) is a special information consultation service developed as part of the Chinese Sci-Tech system. The author introduces the concept of S&TNS, and explains its role, and the role of the university library in the process. A quality control model to improve the quality of service of the S&TNS at…

  11. SLICE/MARC-O: Description of Services. Second Revised Edition.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Libraries, Oklahoma City.

    Following discussions of what SLICE is, what MARC is, what MARC-O is, and what SLICE/MARC-O is, the five services offered by SLICE/MARC-O are described. These services are: (1) cataloging data search and print, (2) MARC record search and copy, (3) standard S.D.I. current awareness, (4) custom S.D.I. current awareness, and (5) SLICE…

  12. What the Computer Taught Me About My Students...or Is Binary Search "Natural"?

    ERIC Educational Resources Information Center

    Pasquino, Anne

    1978-01-01

    Several examples of student-written programs "teaching" a computer to guess systematically in finding a number between 0 and 10,000 are illustrated. These lend support to the contention that rather than being a "natural" application, using a binary search is a learned technique. (MN)
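
    For reference, the guessing strategy the students had to "teach" is ordinary binary search, halving the candidate interval on each guess:

        # Binary search over 0..10,000: at most 14 guesses are ever needed,
        # since ceil(log2(10001)) = 14.
        def guess_number(secret, lo=0, hi=10_000):
            """Find secret in [lo, hi] the way a systematic guesser would."""
            guesses = 0
            while lo <= hi:
                mid = (lo + hi) // 2
                guesses += 1
                if mid == secret:
                    return mid, guesses
                if mid < secret:
                    lo = mid + 1
                else:
                    hi = mid - 1
            raise ValueError("secret outside the search range")

        print(guess_number(7341))  # (7341, n) with n <= 14 guesses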

  13. Practice and Personhood in Professional Interaction: Social Identities and Information Needs.

    ERIC Educational Resources Information Center

    Mokros, Hartmut B.; And Others

    1995-01-01

    Explores the human aspect of information retrieval by examining the behavior and pronoun use of librarians in the course of communicating with patrons during online computer search interactions. Compares two studies on the conduct of librarians as intermediaries in naturally occurring online computer search interactions. (JMV)

  14. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  15. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  16. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  17. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  18. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  19. [Migrants' female partners: social image and the search for sexual and reproductive health services].

    PubMed

    Ochoa-Marín, Sandra C; Cristancho-Marulanda, Sergio; González-López, José Rafael

    2011-04-01

    Analysing the self-image and social image of migrants' female partners (MFP) and their relationship with the search for sexual and reproductive health services (SRHS) in communities having a high US migratory intensity index. Sixty MFP took part in in-depth interviews between October 2004 and May 2005, and 19 semi-structured interviews were held with members of their families, 14 representatives of social organisations, 10 health service representatives and 31 men and women residing in the community. MFP self-image and social image portray these women as "vulnerable", "alone" and "lacking a sexual partner", and thus as sexually inactive. Consequently, "they must not contract sexually-transmitted diseases (STD), use contraceptives or become pregnant" when their partners are in the USA. The search for SRHS was found to be related to self-image and social image, and a notion of family or social control predominated in the behaviour expected of these women, related in turn to whether or not they lived with their families. MFP living with their own family or their partner's family were subject to greater "family" control over their search for SRHS. On the contrary, MFP living alone were subject to greater "social" control over the process. The self-image and social image of sexually inactive women seem to have a bearing on these women's social behaviour and could become an obstacle to the timely search for SRHS in communities with high migratory intensity.

  20. 75 FR 76494 - Sunshine Act Meeting; Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-08

    ... LEGAL SERVICES CORPORATION Sunshine Act Meeting; Notice TIME AND DATE: The Legal Services Corporation Board of Directors' Search Committee for LSC President (``Search Committee'' or ``Committee'') will meet on December 13, 2010. The meeting will begin at 10 a.m. (Eastern Time) and continue until...

  1. 75 FR 72842 - Sunshine Act Meeting; Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ... LEGAL SERVICES CORPORATION Sunshine Act Meeting; Notice Time and Date: The Legal Services Corporation Board of Directors' Search Committee for LSC President (``Search Committee'' or ``Committee'') will meet on November 29, 2010. The meeting will begin at 12 p.m. (Eastern Time) and continue until...

  2. Use of the computer and Internet among Italian families: first national study.

    PubMed

    Bricolo, Francesco; Gentile, Douglas A; Smelser, Rachel L; Serpelloni, Giovanni

    2007-12-01

    Although home Internet access has continued to increase, little is known about actual usage patterns in homes. This nationally representative study of over 4,700 Italian households with children measured computer and Internet use of each family member across 3 months. Data on actual computer and Internet usage were collected by Nielsen//NetRatings service and provide national baseline information on several variables for several age groups separately, including children, adolescents, and adult men and women. National averages are shown for the average amount of time spent using computers and on the Web, the percentage of each age group online, and the types of Web sites viewed. Overall, about one-third of children ages 2 to 11, three-fourths of adolescents and adult women, and over four-fifths of adult men access the Internet each month. Children spend an average of 22 hours/month on the computer, with a jump to 87 hours/month for adolescents. Adult women spend less time (about 60 hours/month), and adult men spend more (over 100). The types of Web sites visited are reported, including the top five for each age group. In general, search engines and Web portals are the top sites visited, regardless of age group. These data provide a baseline for comparisons across time and cultures.

  3. Search Alternatives and Beyond

    ERIC Educational Resources Information Center

    Bell, Steven J.

    2006-01-01

    Internet search has become a routine computing activity, with regular visits to a search engine--usually Google--the norm for most people. The vast majority of searchers, as recent studies of Internet search behavior reveal, search only in the most basic of ways and fail to avail themselves of options that could easily and effortlessly improve…

  4. Scripting for Collaborative Search Computer-Supported Classroom Activities

    ERIC Educational Resources Information Center

    Verdugo, Renato; Barros, Leonardo; Albornoz, Daniela; Nussbaum, Miguel; McFarlane, Angela

    2014-01-01

    Searching online is one of the most powerful resources today's students have for accessing information. Searching in groups is a daily practice across multiple contexts; however, the tools we use for searching online do not enable collaborative practices and traditional search models consider a single user navigating online in solitary. This paper…

  5. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    PubMed

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

    Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage of providing online services while allowing the user to perform tasks with his/her hands. These augmented glasses enable many useful applications, including in the medical domain. For example, Google Glass can easily provide video conferencing between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search, by allowing access to a large number of annotated medical cases during a consultation in a fashion that is non-disruptive for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested it under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that, despite minor problems due to the relative stability of Google Glass, images can be sent to and processed by the medical image retrieval system, and similar images are returned to the user, potentially helping in the decision-making process.
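
    The client-side interaction described here reduces to posting a photo plus keywords to a retrieval endpoint. The sketch below shows that pattern; the URL, field names, and response shape are hypothetical, not the authors' actual API.

        # Send an image and keywords to a content-based retrieval service
        # and read back a list of similar annotated cases.
        import requests

        def find_similar_cases(image_path, keywords):
            """Post a photo plus keywords; return the service's case list."""
            with open(image_path, "rb") as f:
                resp = requests.post(
                    "https://example.org/cbir/search",  # hypothetical endpoint
                    files={"image": f},
                    data={"keywords": " ".join(keywords), "limit": 10},
                    timeout=30,
                )
            resp.raise_for_status()
            return resp.json()["cases"]  # assumed response shape

        # for case in find_similar_cases("skin_lesion.jpg", ["dermatology"]):
        #     print(case["id"], case["similarity"])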

  6. Searching Harvard Business Review Online. . . Lessons in Searching a Full Text Database.

    ERIC Educational Resources Information Center

    Tenopir, Carol

    1985-01-01

    This article examines the Harvard Business Review Online (HBRO) database (bibliographic description fields, abstracts, extracted information, full text, subject descriptors) and reports on 31 sample HBRO searches conducted in Bibliographic Retrieval Services to test differences between searching full text and searching bibliographic record. Sample…

  7. VITMO - A Powerful Tool to Improve Discovery in the Magnetospheric and Ionosphere-Thermosphere Domains

    NASA Astrophysics Data System (ADS)

    Schaefer, R. K.; Morrison, D.; Potter, M.; Stephens, G.; Barnes, R. J.; Talaat, E. R.; Sarris, T.

    2017-12-01

    With the advent of the NASA Magnetospheric Multiscale Mission and the Van Allen Probes, we have space missions that probe the Earth's magnetosphere and radiation belts. These missions fly at far distances from the Earth, in contrast to the larger number of near-Earth satellites, and both make in situ measurements. Energetic particles flow along magnetic field lines from these measurement locations down to the ionosphere/thermosphere region. Discovering other data that may be used with these satellites is a difficult and complicated process. To solve this problem, we have developed a series of light-weight web services that provide a new data search capability for the Virtual Ionosphere Thermosphere Mesosphere Observatory (VITMO). The services consist of a database of spacecraft ephemerides and instrument fields of view; an overlap calculator to find times when the fields of view of different instruments intersect; and a magnetic field line tracing service that maps in situ and ground-based measurements for a number of magnetic field models and geophysical conditions. These services run in real time when the user queries for data and allow the non-specialist user to select data that they were previously unable to locate, opening up analysis opportunities beyond the instrument teams and specialists and making it easier for future students entering the field. Each service on its own provides a useful new capability for virtual observatories; operating together, they provide a powerful new search tool. The ephemerides service was built using the Navigation and Ancillary Information Facility (NAIF) SPICE toolkit (http://naif.jpl.nasa.gov/naif/index.html), allowing it to be extended to support any Earth-orbiting satellite with the addition of the appropriate SPICE kernels. The overlap calculator uses techniques borrowed from computer graphics to identify overlapping measurements in space and time. The calculator will allow a user-defined uncertainty to be selected so that "near misses" can be found. The magnetic field tracing service will feature a database of pre-calculated field line tracings of ground stations but will also allow dynamic tracing of arbitrary coordinates.

  8. Improving Discoverability Between the Magnetosphere and Ionosphere/Thermosphere Domains

    NASA Astrophysics Data System (ADS)

    Schaefer, R. K.; Morrison, D.; Potter, M.; Barnes, R. J.; Talaat, E. R.; Sarris, T.

    2016-12-01

    With the advent of the NASA Magnetospheric Multiscale Mission and the Van Allen Probes, we have space missions that probe the Earth's magnetosphere and radiation belts. These missions fly at far distances from the Earth, in contrast to the larger number of near-Earth satellites, and both make in situ measurements. Energetic particles flow along magnetic field lines from these measurement locations down to the ionosphere/thermosphere region. Discovering other data that may be used with these satellites is a difficult and complicated process. To solve this problem, we have developed a series of light-weight web services that provide a new data search capability for the Virtual Ionosphere Thermosphere Mesosphere Observatory (VITMO). The services consist of a database of spacecraft ephemerides and instrument fields of view; an overlap calculator to find times when the fields of view of different instruments intersect; and a magnetic field line tracing service that maps in situ and ground-based measurements for a number of magnetic field models and geophysical conditions. These services run in real time when the user queries for data and allow the non-specialist user to select data that they were previously unable to locate, opening up analysis opportunities beyond the instrument teams and specialists. Each service on its own provides a useful new capability for virtual observatories; operating together, they will provide a powerful new search tool. The ephemerides service is being built using the Navigation and Ancillary Information Facility (NAIF) SPICE toolkit (http://naif.jpl.nasa.gov), allowing it to be extended to support any Earth-orbiting satellite with the addition of the appropriate SPICE kernels. The overlap calculator uses techniques borrowed from computer graphics to identify overlapping measurements in space and time. The calculator will allow a user-defined uncertainty to be selected so that "near misses" can be found. The magnetic field tracing service will feature a database of pre-calculated field line tracings of ground stations but will also allow dynamic tracing of arbitrary coordinates with a user-selected choice of magnetic field models.
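
    The overlap calculator described in these two records reduces to interval intersection with a user-defined slack so that "near misses" are kept. A minimal sketch, with invented interval data:

        # Find times when two instruments' observation intervals intersect,
        # padding one list by a slack term so near misses are also reported.
        def overlaps(a, b, slack=0.0):
            """Intersections of interval lists a and b, padded by slack."""
            hits = []
            for a0, a1 in a:
                for b0, b1 in b:
                    lo, hi = max(a0, b0 - slack), min(a1, b1 + slack)
                    if lo <= hi:
                        hits.append((lo, hi))
            return hits

        sat = [(0.0, 10.0), (50.0, 60.0)]        # e.g. minutes since an epoch
        radar = [(12.0, 20.0), (58.0, 70.0)]
        print(overlaps(sat, radar))              # [(58.0, 60.0)]
        print(overlaps(sat, radar, slack=3.0))   # the 0-10 vs 12-20 near miss now appears too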

  9. Genetic Testing Registry

    MedlinePlus


  10. The EBI search engine: EBI search as a service-making biological data accessible for all.

    PubMed

    Park, Young M; Squizzato, Silvano; Buso, Nicola; Gur, Tamer; Lopez, Rodrigo

    2017-07-03

    We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing 'Search as a Service' capabilities, which are the main topic of this article. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
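
    A minimal client for the RESTful API described above might look as follows; the domain name is an example, the handling is deliberately sparse, and the exact parameter set and response fields should be checked against the service documentation.

        # Query one EBI Search domain over the REST interface and return
        # the parsed JSON result.
        import requests

        BASE = "https://www.ebi.ac.uk/ebisearch/ws/rest"

        def ebi_search(domain, query, size=5):
            """Run a text query against a single EBI Search domain."""
            resp = requests.get(f"{BASE}/{domain}",
                                params={"query": query, "format": "json",
                                        "size": size},
                                timeout=30)
            resp.raise_for_status()
            return resp.json()

        # result = ebi_search("uniprot", "kinase")
        # print(result.get("hitCount"), [e["id"] for e in result.get("entries", [])])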

  11. Data-driven indexing mechanism for the recognition of polyhedral objects

    NASA Astrophysics Data System (ADS)

    McLean, Stewart; Horan, Peter; Caelli, Terry M.

    1992-02-01

    This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
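
    The idea of feature-indexed hypothesis generation can be illustrated with a toy index: models are keyed offline by quantized feature values, so a measured scene feature retrieves only plausible candidates instead of forcing a scan of the whole model database. All model names and angle values below are invented.

        # Offline indexing of models by quantized vertex angles, followed
        # by online table lookup to generate match hypotheses.
        from collections import defaultdict

        def quantize(angle_deg, bin_width=10):
            return int(angle_deg // bin_width)

        # Offline: index each model by the quantized angles at its vertices.
        models = {"cube": [90, 90, 90], "prism": [60, 60, 90]}
        index = defaultdict(set)
        for name, angles in models.items():
            for a in angles:
                index[quantize(a)].add(name)

        # Online: a measured scene feature retrieves only indexed candidates.
        def hypotheses(measured_angle):
            return index.get(quantize(measured_angle), set())

        print(hypotheses(61))  # {'prism'}: only one model is even considered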

  12. Enabling Open Research Data Discovery through a Recommender System

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya; Jayasinghe, Gaya; Klump, Jens; Hogan, Dominic

    2017-04-01

    Government agencies, universities, research and nonprofit organizations are increasingly publishing their datasets to promote transparency, induce new research and generate economic value through the development of new products or services. The datasets may be downloaded from various data portals (data repositories), which are either general or domain-specific. The Registry of Research Data Repositories (re3data.org) lists more than 2500 such data repositories from around the globe. Data portals allow keyword search and faceted navigation to facilitate discovery of research datasets. However, the volume and variety of datasets have made finding relevant datasets more difficult. Common dataset search mechanisms can be time-consuming, may produce irrelevant results, and are primarily suitable for users who are familiar with the general structure and contents of the respective database. Therefore, we need new approaches to support research data discovery. Recommender systems offer new possibilities for users to find datasets that are relevant to their research interests. This study presents a recommender system developed for the CSIRO Data Access Portal (DAP, http://data.csiro.au). The datasets hosted on the portal are diverse, published by researchers from 13 business units in the organisation. The goal of the study is not to replace the portal's current search mechanisms, but rather to extend data discovery through exploratory search, in this case by building a recommender system. We adopted a hybrid recommendation approach comprising content-based filtering and item-item collaborative filtering. The content-based filtering computes similarities between datasets based on metadata such as title, keywords, descriptions, fields of research, location, contributors, etc. The collaborative filtering utilizes user search behaviour and download patterns derived from the server logs to determine similar datasets. These similarities are then combined with different degrees of importance (weights) to determine the overall data similarity. We determined the similarity weights based on a survey involving 150 users of the portal. The recommender results for a given dataset are accessible programmatically via a RESTful web service. An offline evaluation involving data users demonstrates the ability of the recommender system to discover relevant and 'novel' datasets.
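
    The weighted combination described above can be sketched directly; the similarity values and weights below are invented, standing in for the metadata-derived and log-derived signals.

        # Hybrid scoring: weight a content similarity (from metadata) and a
        # collaborative similarity (from search/download logs), then rank.
        def hybrid_score(content_sim, usage_sim, w_content=0.6, w_usage=0.4):
            """Weighted combination of the two similarity signals."""
            return w_content * content_sim + w_usage * usage_sim

        # Pairwise similarities to dataset "D1" from each signal (invented):
        content = {"D2": 0.82, "D3": 0.40, "D4": 0.10}  # metadata similarity
        usage = {"D2": 0.30, "D3": 0.90, "D4": 0.05}    # co-download similarity

        ranked = sorted(content,
                        key=lambda d: hybrid_score(content[d], usage[d]),
                        reverse=True)
        print(ranked)  # ['D2', 'D3', 'D4'] under these weights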

  13. 75 FR 41239 - Sunshine Act; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-15

    ... LEGAL SERVICES CORPORATION Sunshine Act; Notice of Meeting TIME AND DATE: The Legal Services Corporation Board of Directors' Search Committee for LSC President (``Search Committee'' or ``Committee'') will meet on July 20, 2010. The meeting will begin at 4 p.m. (Central Daylight Savings Time) and...

  14. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

    In order to improve the computation speed of the matching pursuit decomposition of seismic data, a parallel matching pursuit algorithm is designed in this paper. In every iteration we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and distribute them evenly across the nodes to search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm exploits the advantages of parallel computing to significantly improve the computation speed of the matching pursuit decomposition, and it also scales well. Moreover, having each compute node search for only one optimal Morlet wavelet per iteration is the most efficient implementation.
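
    The parallel step can be sketched as follows, with a process pool standing in for the MPI compute nodes and an invented frequency/width grid for the Morlet search; the paper's MPI implementation is not reproduced.

        # One envelope peak per worker: each worker grid-searches the
        # Morlet parameters that best match the signal around its peak.
        import numpy as np
        from multiprocessing import Pool

        def morlet(t, f, sigma):
            """Real-valued Morlet-style atom: Gaussian-windowed cosine."""
            return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * t)

        def best_wavelet(args):
            """Grid-search frequency/width for the atom matching one peak."""
            signal, t, center = args
            best = (-np.inf, None)
            for f in np.arange(10.0, 60.0, 5.0):    # Hz, assumed search grid
                for sigma in (0.01, 0.02, 0.04):    # s
                    atom = morlet(t - t[center], f, sigma)
                    corr = abs(np.dot(signal, atom)) / np.linalg.norm(atom)
                    if corr > best[0]:
                        best = (corr, (f, sigma))
            return center, best[1]

        if __name__ == "__main__":
            t = np.linspace(0.0, 1.0, 1000)
            signal = morlet(t - 0.3, 30.0, 0.02) + morlet(t - 0.7, 45.0, 0.01)
            peaks = [300, 700]                      # envelope-peak samples
            with Pool(2) as pool:                   # one worker per "node"
                for center, params in pool.map(
                        best_wavelet, [(signal, t, p) for p in peaks]):
                    print(center, params)           # recovers (30, 0.02) and (45, 0.01)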

  15. Cube search, revisited.

    PubMed

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-03-16

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. © 2015 ARVO.

  16. Cube search, revisited

    PubMed Central

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-01-01

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

  17. Self-referred whole-body CT imaging: current implications for health care consumers.

    PubMed

    Illes, Judy; Fan, Ellen; Koenig, Barbara A; Raffin, Thomas A; Kann, Dylan; Atlas, Scott W

    2003-08-01

    To conduct an empirical analysis of self-referred whole-body computed tomography (CT) and develop a profile of the geographic and demographic distribution of centers, types of services and modalities, costs, and procedures for reporting results. An analysis was conducted of Web sites for imaging centers accepting self-referred patients identified by two widely used Internet search engines with large indexes. These Web sites were analyzed for geographic location, type of screening center, services, costs, and procedures for managing imaging results. Demographic data were extrapolated for analysis on the basis of center location. Descriptive statistics, such as frequencies, means, SDs, ranges, and CIs, were generated to describe the characteristics of the samples. Data were compared with national norms by using a distribution-free method for calculating a 95% CI (P <.05) for the median. Eighty-eight centers identified with the search methods were widely distributed across the United States, with a concentration on both coasts. Demographic analysis further situated them in areas of the country characterized by a population that consisted largely of European Americans (P <.05) and individuals of higher education (P <.05) and socioeconomic status (P <.05). Forty-seven centers offered whole-body screening; heart and lung examinations were most frequently offered. Procedures for reporting results were highly variable. The geographic distribution of the centers suggests target populations of educated health-conscious consumers who can assume high out-of-pocket costs. Guidelines developed from within the profession and further research are needed to ensure that benefits of these services outweigh risks to individuals and the health care system. Copyright RSNA, 2003.

  18. Data federation strategies for ATLAS using XRootD

    NASA Astrophysics Data System (ADS)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2 and, in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and then to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end users or ATLAS analysis services, is discussed.
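
    A cost-of-data-access matrix of the kind mentioned above might be assembled as in the following sketch; all throughput and load figures are invented, and the weighting is purely illustrative, not ATLAS's actual formula.

        # Combine measured pairwise throughput with a per-site load factor
        # into a cost matrix, then pick the cheapest source for a transfer.
        import numpy as np

        throughput_mbps = np.array([[0.0, 400.0, 120.0],   # row: source site
                                    [400.0, 0.0, 150.0],   # col: destination
                                    [120.0, 150.0, 0.0]])
        site_load = np.array([0.2, 0.9, 0.5])              # 0 = idle, 1 = saturated

        with np.errstate(divide="ignore"):
            transfer_cost = np.where(throughput_mbps > 0.0,
                                     1.0 / throughput_mbps, 0.0)
        cost = transfer_cost * (1.0 + site_load[:, None])  # busy sources cost more

        # Cheapest source for data needed at site 2:
        options = np.where(cost[:, 2] > 0.0, cost[:, 2], np.inf)
        print(int(np.argmin(options)))  # 0: site 0 wins despite site 1's faster link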

  19. Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.

    NASA Astrophysics Data System (ADS)

    Stallard, A. P.; Smith, S. R.; Elya, J. L.

    2016-12-01

    The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database, including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS, which are modeled into a time tree and an r-tree using the GraphAware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within CYPHER (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via CYPHER statements. Neo4j excels at performing relationship- and path-based queries, which challenge relational SQL databases because they require memory-intensive joins. Consider a user who wants to find records over several years, but only for specific months. If a traditional database only stores timestamps, this type of query is complex and likely prohibitively slow. Using the time-tree model, one can specify a path from the root to the data that restricts resolutions to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage over the SQL alternative.
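
    A time-tree query of the kind described might look as follows with the official neo4j Python driver; the node labels and CHILD/CONTAINS relationships are an assumed in-graph layout for illustration, not necessarily the SAMOS schema.

        # Traverse an in-graph time tree (root -> year -> month -> day) and
        # return only observations from the requested months, with no joins.
        from neo4j import GraphDatabase

        CYPHER = """
        MATCH (root:TimeTreeRoot)-[:CHILD]->(y:Year)-[:CHILD]->(m:Month)
              -[:CHILD]->(d:Day)-[:CONTAINS]->(obs:Observation)
        WHERE y.value IN [2014, 2015, 2016] AND m.value IN [6, 7, 8]
        RETURN obs.id AS id, obs.timestamp AS ts
        ORDER BY ts
        """

        def summer_observations(uri, user, password):
            """Fetch June-August observations across several years."""
            driver = GraphDatabase.driver(uri, auth=(user, password))
            with driver.session() as session:
                return [dict(record) for record in session.run(CYPHER)]

        # rows = summer_observations("bolt://localhost:7687", "neo4j", "secret")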

  20. A Grid Metadata Service for Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Critical challenges for climate modeling researchers are strongly connected with increasingly complex simulation models and the huge quantities of datasets they produce. Future trends in climate modeling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale, complex climate simulations must be considered a key requirement for Earth Science and Environmental distributed systems. From the data management perspective, (i) the quantity of data will continuously increase; (ii) data will become more and more distributed and widespread; (iii) data sharing/federation will be a key challenge among different sites distributed worldwide; and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for carrying out data search and discovery, and for managing and accessing huge and distributed amounts of data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service, a data grid solution focused on metadata management. Unlike classical approaches, the proposed data-grid solution addresses scalability, transparency, security, efficiency and interoperability. The GRelC service we propose provides access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., leveraging SQL as the query language, as well as XML databases such as XIndice and eXist, and libxml2-based documents, adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment. Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and, finally, (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC, Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.

  1. Fermilab | Creative Services

    Science.gov Websites


  2. NASA's Global Change Master Directory: Discover and Access Earth Science Data Sets, Related Data Services, and Climate Diagnostics

    NASA Astrophysics Data System (ADS)

    Aleman, A.; Olsen, L. M.; Ritz, S.; Stevens, T.; Morahan, M.; Grebas, S. K.

    2011-12-01

    NASA's Global Change Master Directory provides the scientific community with the ability to discover, access, and use Earth science data, data-related services, and climate diagnostics worldwide. The GCMD offers descriptions of Earth science data sets using the Directory Interchange Format (DIF) metadata standard; Earth science related data services are described using the Service Entry Resource Format (SERF); and climate visualizations are described using the Climate Diagnostic (CD) standard. The DIF, SERF and CD standards each capture data attributes used to determine whether a data set, service, or climate visualization is relevant to a user's needs. Metadata fields include: title, summary, science keywords, service keywords, data center, data set citation, personnel, instrument, platform, quality, related URL, temporal and spatial coverage, data resolution and distribution information. In addition, nine valuable sets of controlled vocabularies have been developed to assist users in normalizing the search for data descriptions. An update to the GCMD's search functionality is planned to further capitalize on the controlled vocabularies during database queries. By implementing a dynamic keyword "tree", users will have the ability to search for data sets by combining keywords in new ways. This will allow users to conduct more relevant and efficient database searches to support the free exchange and re-use of Earth science data.

  3. Gazetteer Brokering through Semantic Mediation

    NASA Astrophysics Data System (ADS)

    Hobona, G.; Bermudez, L. E.; Brackin, R.

    2013-12-01

    A gazetteer is a geographical directory containing information regarding places. It provides names, locations and other attributes for places, which may include points of interest (e.g. buildings, oilfields and boreholes) and other features. These features can be published via web services conforming to the Gazetteer Application Profile of the Web Feature Service (WFS) standard of the Open Geospatial Consortium (OGC). Against the backdrop of advances in geophysical surveys, there has been a significant increase in the amount of data referenced to locations. Gazetteer services have played a significant role in facilitating access to such data, including through the provision of specialized queries such as text, spatial and fuzzy search. Recent developments in the OGC have led to advances in gazetteers such as support for multilingualism, diacritics, and querying via advanced spatial constraints (e.g. search by radial search and nearest neighbor). A remaining challenge, however, is that gazetteers produced by different organizations have typically been modeled differently. Inconsistencies between gazetteers produced by different organizations may include naming the same feature in different ways, naming the attributes differently, placing the feature at a different location, and providing fewer or more attributes than the other services. The Gazetteer application profile of the WFS is a starting point for addressing such inconsistencies by providing a standardized interface based on rules specified in ISO 19112, the international standard for spatial referencing by geographic identifiers. The profile, however, does not provide rules to deal with semantic inconsistencies. The USGS and NGA commissioned research into the potential for a Single Point of Entry Global Gazetteer (SPEGG). The research was conducted by the Cross Community Interoperability thread of the OGC testbed, referred to as OWS-9. The testbed prototyped approaches for brokering gazetteers through the use of semantic web technologies, including ontologies and a semantic mediator. The semantically enhanced SPEGG allowed a client to submit a single query (e.g. 'hills') and to retrieve data from two separate gazetteers with different vocabularies (e.g. where one refers to 'summits', another refers to 'hills'). Supporting the SPEGG was a SPARQL server that held the ontologies and processed queries against them. Earth Science surveys and forecasts always refer to a place on Earth; being able to share information about a place, and to resolve inconsistencies about that place across different sources, will enable geoscientists to do their research better. With the advent of mobile geo-computing and location-based services (LBS), brokering gazetteers will provide geoscientists with access to gazetteer services rich with information and functionality beyond that offered by current generic gazetteers.
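
    Stripped to its essentials, the mediation step translates a query term into each gazetteer's own vocabulary before fanning the query out. A minimal sketch with invented mappings (the real SPEGG resolves these via ontologies held on a SPARQL server):

        # Expand one user term into per-gazetteer vocabulary before fan-out.
        SYNONYMS = {  # stand-in for the OWS-9 ontologies served over SPARQL
            "hills": {"gazetteer_a": "hills", "gazetteer_b": "summits"},
            "rivers": {"gazetteer_a": "rivers", "gazetteer_b": "streams"},
        }

        def broker_query(term, gazetteers):
            """Translate one query term into each gazetteer's vocabulary."""
            per_service = SYNONYMS.get(term, {})
            return {g: per_service.get(g, term) for g in gazetteers}

        print(broker_query("hills", ["gazetteer_a", "gazetteer_b"]))
        # {'gazetteer_a': 'hills', 'gazetteer_b': 'summits'}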

  4. Planning chemical syntheses with deep neural networks and symbolic AI

    NASA Astrophysics Data System (ADS)

    Segler, Marwin H. S.; Preuss, Mike; Waller, Mark P.

    2018-03-01

    To plan the syntheses of small organic molecules, chemists use retrosynthesis, a problem-solving technique in which target molecules are recursively transformed into increasingly simpler precursors. Computer-aided retrosynthesis would be a valuable tool but at present it is slow and provides results of unsatisfactory quality. Here we use Monte Carlo tree search and symbolic artificial intelligence (AI) to discover retrosynthetic routes. We combined Monte Carlo tree search with an expansion policy network that guides the search, and a filter network to pre-select the most promising retrosynthetic steps. These deep neural networks were trained on essentially all reactions ever published in organic chemistry. Our system solves for almost twice as many molecules, thirty times faster than the traditional computer-aided search method, which is based on extracted rules and hand-designed heuristics. In a double-blind AB test, chemists on average considered our computer-generated routes to be equivalent to reported literature routes.
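
    The search loop at the core of such a system is plain Monte Carlo tree search. The sketch below runs a UCT-style search on a toy problem; the paper's learned expansion policy and filter networks are represented here only by simple stand-in functions.

        # Minimal UCT: select by upper-confidence score, expand with a
        # policy, simulate with a rollout, and backpropagate the reward.
        import math
        import random

        class Node:
            def __init__(self, state, parent=None):
                self.state, self.parent = state, parent
                self.children, self.visits, self.value = [], 0, 0.0

        def uct(node, c=1.4):
            """Upper-confidence score balancing exploitation and exploration."""
            return (node.value / node.visits +
                    c * math.sqrt(math.log(node.parent.visits) / node.visits))

        def search(root_state, expand, rollout, iters=200):
            root = Node(root_state)
            for _ in range(iters):
                node = root
                # Selection: descend via UCT while fully expanded.
                while node.children and all(ch.visits for ch in node.children):
                    node = max(node.children, key=uct)
                # Expansion: add children proposed by the expansion policy.
                if not node.children:
                    node.children = [Node(s, node) for s in expand(node.state)]
                if node.children:
                    fresh = [ch for ch in node.children if ch.visits == 0]
                    node = random.choice(fresh or node.children)
                # Simulation and backpropagation.
                reward = rollout(node.state)
                while node:
                    node.visits += 1
                    node.value += reward
                    node = node.parent
            return max(root.children, key=lambda ch: ch.visits).state

        # Toy problem: reach 10 in steps of +1/+2; reward favors arriving exactly.
        expand = lambda s: [s + 1, s + 2] if s < 10 else []
        rollout = lambda s: 1.0 if s == 10 else 1.0 / (1 + abs(10 - s))
        print(search(0, expand, rollout))  # the first move the search prefers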

  5. Coverage maximization under resource constraints using a nonuniform proliferating random walk.

    PubMed

    Saha, Sudipta; Ganguly, Niloy

    2013-02-01

    Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is maximization of coverage, i.e., of the number of distinct nodes visited under constraints on network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms waste network resources. In this work, using results from past analytical studies of a K-random-walk-based algorithm, we identify that redundancy quickly increases with the density of the walkers. Based on this observation, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies, and find that it performs particularly well in networks that are both highly clustered and sparse.
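
    The proliferation rule can be sketched on a toy topology: each walker estimates local coverage from its neighborhood and forks only where coverage is sparse. The grid network, threshold, and density estimate below are invented stand-ins for the paper's actual design.

        # Random walkers that fork in sparsely covered regions, bounded by
        # a maximum walker budget; coverage is the set of distinct visits.
        import random

        def coverage_walk(neighbors, start, steps=200, max_walkers=64, thresh=0.3):
            """Random walk that proliferates where local coverage is sparse."""
            walkers, visited = [start], {start}
            for _ in range(steps):
                next_walkers = []
                for node in walkers:
                    nbrs = neighbors(node)
                    step = random.choice(nbrs)           # take a random step
                    visited.add(step)
                    next_walkers.append(step)
                    # Local density estimate: fraction of neighbors already seen.
                    density = sum(n in visited for n in nbrs) / len(nbrs)
                    if density < thresh and len(next_walkers) < max_walkers:
                        next_walkers.append(random.choice(nbrs))  # proliferate
                walkers = next_walkers[:max_walkers]
            return visited

        # 2-D torus as a stand-in network topology.
        N = 30
        def grid_neighbors(v):
            x, y = v
            return [((x + 1) % N, y), ((x - 1) % N, y),
                    (x, (y + 1) % N), (x, (y - 1) % N)]

        print(len(coverage_walk(grid_neighbors, (0, 0))))  # distinct nodes covered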

  6. Climate Model Diagnostic Analyzer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei

    2015-01-01

    The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. Given the exploratory nature of climate data analyses and the explosive growth of datasets and service tools, scientists struggle to keep track of their datasets, tools, and execution/study history, let alone share them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.

  7. Determining Appropriate Coupling between User Experiences and Earth Science Data Services

    NASA Astrophysics Data System (ADS)

    Moghaddam-Taaheri, E.; Pilone, D.; Newman, D. J.; Mitchell, A. E.; Goff, T. D.; Baynes, K.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format-agnostic metadata repository supporting over 3000 collections and 100M granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human-facing search and order web application named Reverb. Reverb exposes ECHO's capabilities through an interactive, Web 2.0 application designed around searching for Earth Science data and downloading or ordering data of interest. ECHO and Reverb have supported the concept of Earth Science data services for several years, but only for discovery. Invocation of these services was not a primary capability of the user experience. As more and more Earth Science data moves online and away from the concept of data ordering, progress has been made in making on-demand services available for directly accessed data. These concepts have existed through access mechanisms such as OPeNDAP but are proliferating to accommodate a wider variety of services and service providers. Recently, the EOSDIS Service Interface (ESI) was defined and integrated into the ECS system. The ESI allows data providers to expose a wide variety of service capabilities including reprojection, reformatting, spatial and band subsetting, and resampling. ECHO and Reverb were tasked with making these services available to end-users in a meaningful and usable way that integrated into the existing search and ordering workflow. This presentation discusses the challenges associated with exposing disparate service capabilities while presenting a meaningful and cohesive user experience. Specifically, we'll discuss:
    - Benefits and challenges of tightly coupling the user interface with underlying services
    - Approaches to generic service descriptions
    - Approaches to dynamic user interfaces that better describe service capabilities while minimizing application coupling
    - Challenges associated with traditional WSDL / UDDI style service descriptions
    - A walkthrough of the solution used by ECHO and Reverb to integrate and expose ESI-compliant services to our users

  8. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the particular characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed. The computation time is polynomial in the number of users.
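
    The local-improvement step can be sketched as greedy bit flipping on the standard OMD log-likelihood objective f(b) = 2y·b − bᵀRb over b ∈ {−1,+1}^K; in the sketch below the correlation matrix R and matched-filter output y are random placeholders, and the evolution-strategy wrapper is omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      K = 8
      R = rng.normal(size=(K, K)); R = R @ R.T        # stand-in correlation matrix
      y = rng.normal(size=K)                          # stand-in matched-filter output

      def f(b):
          return 2 * y @ b - b @ R @ b

      def local_search(b):
          improved = True
          while improved:
              improved = False
              for k in range(K):
                  before = f(b)
                  b[k] = -b[k]                        # tentative bit flip
                  if f(b) > before:
                      improved = True                 # keep the improving flip
                  else:
                      b[k] = -b[k]                    # revert
          return b

      b = rng.choice([-1.0, 1.0], size=K)
      print(f(b), "->", f(local_search(b)))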

  9. Patient flow within UK emergency departments: a systematic review of the use of computer simulation modelling methods

    PubMed Central

    Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos

    2017-01-01

    Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
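
    For readers unfamiliar with the dominant method found by the review, the following is a minimal discrete event simulation of ED patient flow (one queue, a fixed number of treatment spaces, exponential arrival and treatment times); all rates are illustrative placeholders, since real studies calibrate such inputs to hospital data.

      import heapq, random

      random.seed(1)
      CUBICLES = 3                      # treatment spaces
      MEAN_INTERARRIVAL = 0.4           # hours between arrivals (exponential)
      MEAN_TREATMENT = 1.0              # hours of treatment (exponential)

      events, t = [], 0.0
      while t < 1000.0:                 # schedule ~1000 h of Poisson arrivals
          t += random.expovariate(1 / MEAN_INTERARRIVAL)
          heapq.heappush(events, (t, "arrival"))

      free, queue, waits = CUBICLES, [], []
      while events:
          now, kind = heapq.heappop(events)
          if kind == "arrival":
              queue.append(now)
          else:                         # a treatment finished
              free += 1
          while free and queue:         # admit the longest-waiting patient
              waits.append(now - queue.pop(0))
              free -= 1
              heapq.heappush(events, (now + random.expovariate(1 / MEAN_TREATMENT), "done"))

      print(f"mean wait {sum(waits) / len(waits):.2f} h over {len(waits)} patients")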

  10. 36 CFR § 404.7 - Fees to be charged-general.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operating statutory-based fee schedule programs (see definition in § 404.6(b)), such as the NTIS, ABMC...(s) making the search. (b) Computer searches for records. ABMC will charge at the actual direct cost.... For copies prepared by computer, such as tapes or printouts, ABMC shall charge the actual cost...

  11. 5 CFR 1303.40 - Fees to be charged-general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... distribution by agencies operating statutory-based fee schedule programs (see definition in Sections 1303.30(b... percent) of the employee(s) making the search. (b) Computer searches for records. OMB will charge at the... page. For copies prepared by computer, such as tapes or printouts, OMB shall charge the actual cost...

  12. 5 CFR 1303.40 - Fees to be charged-general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... distribution by agencies operating statutory-based fee schedule programs (see definition in Sections 1303.30(b... percent) of the employee(s) making the search. (b) Computer searches for records. OMB will charge at the... page. For copies prepared by computer, such as tapes or printouts, OMB shall charge the actual cost...

  13. 5 CFR 1303.40 - Fees to be charged-general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... distribution by agencies operating statutory-based fee schedule programs (see definition in Sections 1303.30(b... percent) of the employee(s) making the search. (b) Computer searches for records. OMB will charge at the... page. For copies prepared by computer, such as tapes or printouts, OMB shall charge the actual cost...

  14. 36 CFR 404.7 - Fees to be charged-general.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operating statutory-based fee schedule programs (see definition in § 404.6(b)), such as the NTIS, ABMC...(s) making the search. (b) Computer searches for records. ABMC will charge at the actual direct cost.... For copies prepared by computer, such as tapes or printouts, ABMC shall charge the actual cost...

  15. 36 CFR 404.7 - Fees to be charged-general.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operating statutory-based fee schedule programs (see definition in § 404.6(b)), such as the NTIS, ABMC...(s) making the search. (b) Computer searches for records. ABMC will charge at the actual direct cost.... For copies prepared by computer, such as tapes or printouts, ABMC shall charge the actual cost...

  16. Computer Software for Forestry Technology Curricula. Final Report.

    ERIC Educational Resources Information Center

    Watson, Roy C.; Scobie, Walter R.

    Since microcomputers are being used more and more frequently in the forest products industry in the Pacific Northwest, Green River Community College conducted a project to search for BASIC language computer programs pertaining to forestry, and when possible, to adapt such software for use in teaching forestry technology. The search for applicable…

  17. A computer search for asteroid families

    NASA Technical Reports Server (NTRS)

    Lindblad, Bertil A.

    1992-01-01

    The improved proper elements of 4100 numbered asteroids have been searched for clusterings in a, e, i space using a computer technique based on the D-criterion. A list of 14 dynamical families each with more than 15 members is presented. Quantitative measurements of the density and dimensions in phase space of each family are presented.
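
    A clustering distance of this kind can be sketched as follows; the weights (5/4, 2, 2) follow the commonly cited Zappalà-style proper-element metric and are illustrative only, since Lindblad's paper uses its own D-criterion formulation.

      from math import sin, sqrt, radians

      def distance(ast1, ast2, ka=5/4, ke=2.0, ki=2.0):
          """Dimensionless separation of two asteroids in proper-element space."""
          a1, e1, i1 = ast1
          a2, e2, i2 = ast2
          am = (a1 + a2) / 2
          return sqrt(ka * ((a1 - a2) / am) ** 2
                      + ke * (e1 - e2) ** 2
                      + ki * (sin(radians(i1)) - sin(radians(i2))) ** 2)

      member    = (2.36, 0.099, 6.4)   # (a [AU], e, i [deg]) -- illustrative values
      candidate = (2.37, 0.102, 6.6)
      print(distance(member, candidate))   # small value -> same-family candidate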

  18. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farbin, Amir

    2015-07-15

    This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

  19. 36 CFR § 1250.56 - Fee schedule for NARA operational records.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... professional employee of NARA, the rate is $33 per hour (or fraction thereof) (2) Computer searching. This is the actual cost to NARA of operating the computer and the salary of the operator. When the search is... general legal or policy issues regarding the application of exemptions. (c) Reproduction fees—(1) Self...

  20. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  1. Turbocharged molecular discovery of OLED emitters: from high-throughput quantum simulation to highly efficient TADF devices

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán

    2016-09-01

    Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
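
    The machine-learning triage step can be sketched with any off-the-shelf regressor: train a surrogate on molecules that have already been simulated, then promote only the top-ranked untested candidates to full quantum chemistry. Features and targets below are synthetic stand-ins for real molecular descriptors and TADF figures of merit.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      # Molecules already simulated: descriptors X_done, figure of merit y_done.
      X_done = rng.normal(size=(500, 16))
      y_done = X_done[:, 0] - X_done[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

      surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
      surrogate.fit(X_done, y_done)

      # Rank the untested library and send only the best 1% to quantum chemistry.
      X_candidates = rng.normal(size=(10_000, 16))
      ranking = np.argsort(surrogate.predict(X_candidates))[::-1]
      print("promoted to full simulation:", ranking[:10])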

  2. Introducing a New Interface for the Online MagIC Database by Integrating Data Uploading, Searching, and Visualization

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.

    2013-12-01

    The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily-accessible rich datasets. Previously, uploading data to the MagIC database required an Excel spreadsheet on either a Mac or a PC. The new method of uploading data utilizes an HTML5 web interface where the only computer requirement is a modern browser. This web interface highlights all errors discovered in the dataset at once, instead of the iterative error-checking process of the previous Excel spreadsheet data checker. Because it is a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system where the data from each contribution is displayed in tables similar to how the data is uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired. The saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.

  3. Annotating images by mining image search results.

    PubMed

    Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying

    2008-11-01

    Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text are collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework where a query keyword is provided along with the uncaptioned image to improve both the effectiveness and efficiency. This is helpful when the collected data set is not dense everywhere. In this sense, our approach contains three steps: 1) the search process to discover visually and semantically similar search results, 2) the mining process to identify salient terms from textual descriptions of the search results, and 3) the annotation rejection process to filter out noisy terms yielded by Step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes, and the other is to implement it as a distributed system, of which the search and mining processes are provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotating with unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation. We provide experimental results to prove this.
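
    The mining step (step 2) can be approximated by a TF-IDF-style saliency score, computed over the surrounding text of the visual search results against a background corpus; this is a simplified stand-in for the paper's actual measure.

      import math
      from collections import Counter

      def term_scores(result_docs, background_docs):
          """Score terms frequent in the results but rare in the background."""
          tf = Counter(w for d in result_docs for w in d.split())
          n_bg = len(background_docs)
          return {t: c * math.log((n_bg + 1) / (1 + sum(t in d.split() for d in background_docs)))
                  for t, c in tf.items()}

      results = ["sunset over the golden gate bridge",
                 "golden gate bridge at dusk",
                 "fog rolling past the bridge"]
      background = ["a cat on a sofa", "city traffic at night",
                    "mountain lake in summer", "golden retriever puppy"]

      ranked = sorted(term_scores(results, background).items(), key=lambda kv: -kv[1])
      print(ranked[:4])   # "bridge", "gate", ... should rank high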

  4. Teaching Non-Recursive Binary Searching: Establishing a Conceptual Framework.

    ERIC Educational Resources Information Center

    Magel, E. Terry

    1989-01-01

    Discusses problems associated with teaching non-recursive binary searching in computer language classes, and describes a teacher-directed dialog based on dictionary use that helps students use their previous searching experiences to conceptualize the binary search process. Algorithmic development is discussed and appropriate classroom discussion…
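
    The non-recursive algorithm under discussion, in a minimal runnable form: maintain [lo, hi) bounds and halve the interval until the key is found.

      def binary_search(items, key):
          lo, hi = 0, len(items)          # search the half-open range [lo, hi)
          while lo < hi:
              mid = (lo + hi) // 2
              if items[mid] == key:
                  return mid
              if items[mid] < key:
                  lo = mid + 1            # key is in the upper half
              else:
                  hi = mid                # key is in the lower half
          return -1                       # not present

      words = ["ant", "bee", "cat", "dog", "elk", "fox"]  # sorted, like a dictionary
      print(binary_search(words, "dog"))   # 3
      print(binary_search(words, "gnu"))   # -1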

  5. National Center for Biotechnology Information

    MedlinePlus


  6. Medical overuse in the Iranian healthcare system: a systematic review protocol.

    PubMed

    Arab-Zozani, Morteza; Pezeshki, Mohammad Zakaria; Khodayari-Zarnaq, Rahim; Janati, Ali

    2018-04-17

    Lack of resources is one of the main problems of all healthcare systems. Recent studies have shown that reducing the overuse of medical services plays an important role in reducing healthcare system costs. Overuse of medical services is a major problem: it threatens the quality of services, can harm patients, and creates excess costs. So far, few studies have been conducted in this regard in Iran. The main objective of this systematic review is to perform an inclusive search for studies that report overuse of medical services in the Iranian healthcare system. An extensive search of the literature will be conducted in six databases, including PubMed, Embase, Scopus, Web of Science, Cochrane and the Scientific Information Database, using a comprehensive search strategy to identify studies on overuse of medical care. The search will cover studies published, without time limit, until the end of 2017, supplemented by reference tracking, author tracking and expert consultation. The search will be conducted on 1 February 2018. Any study that reports overuse of a service, judged against a specific standard, will be included. Two reviewers will screen the articles based on title, abstract and full text, and extract data about the type of service, clinical area and overuse rate. Quality appraisal will be assessed using the Joanna Briggs Institute checklist. Potential discrepancies will be resolved by consulting a third author. Recommendations will be made to the Iranian MOHME (Ministry of Health and Medical Education) in order to make better evidence-based decisions about medical services in the future. CRD42017075481. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. SETI with Help from Five Million Volunteers: The Berkeley SETI Efforts

    NASA Astrophysics Data System (ADS)

    Korpela, E. J.; Anderson, D. P.; Bankay, R.; Cobb, J.; Foster, G.; Howard, A.; Lebofsky, M.; Marcy, G.; Parsons, A.; Siemion, A.; von Korff, J.; Werthimer, D.; Douglas, K. A.

    2009-12-01

    We summarize radio and optical SETI programs based at the University of California, Berkeley. The ongoing SERENDIP V sky survey searches for radio signals at the 300 meter Arecibo Observatory. The currently installed configuration supports 128 million channels over a 200 MHz bandwidth with 1.6 Hz spectral resolution. Frequency stepping allows the spectrometer to cover the full 300 MHz band of the Arecibo L-band receivers. The final configuration will allow data from all 14 receivers in the Arecibo L-band Focal Array to be monitored simultaneously with over 1.8 billion simultaneous channels. SETI@home uses the desktop computers of volunteers to analyze over 100 TB of data taken at Arecibo. Over 5 million volunteers have run SETI@home during its 10 year history. The SETI@home sky survey is 10 times more sensitive than SERENDIP V but it covers only a 2.5 MHz band, centered on 1420 MHz. SETI@home searches a much wider parameter space, including 14 octaves of signal bandwidth and 15 octaves of pulse period with Doppler drift corrections from -100 Hz/s to +100 Hz/s. The ASTROPULSE project is the first SETI search for μs time scale pulses in the radio spectrum. Because short pulses are dispersed by the interstellar medium, and the amount of dispersion is unknown, ASTROPULSE must search through 30,000 possible dispersions. Substantial computing power is required to conduct this search, so the project will use volunteers and their personal computers to carry out the computation (using distributed computing similar to SETI@home). The SEVENDIP optical pulse search looks for ns time scale pulses at visible wavelengths. It utilizes an automated 30 inch telescope, three ultra-fast photomultiplier tubes and a coincidence detector. The target list includes F, G, K and M stars, globular clusters and galaxies.
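
    The dispersion search can be sketched as a brute-force loop over trial dispersion measures (DMs), using the standard cold-plasma delay Δt = 4.149×10³ s · DM · (f⁻² − f_ref⁻²) with f in MHz; the data below are synthetic and the channelization is illustrative.

      import numpy as np

      K_DM = 4.149e3                                   # s MHz^2 pc^-1 cm^3
      freqs = np.linspace(1420.0, 1430.0, 64)          # channel centres [MHz]
      dt = 1e-3                                        # sample time [s]

      def dedisperse(data, dm):
          """Undo the dispersion delay of each channel at a trial DM, then sum."""
          delays = K_DM * dm * (freqs ** -2 - freqs.max() ** -2)
          shifts = np.round(delays / dt).astype(int)
          return sum(np.roll(data[i], -shifts[i]) for i in range(len(freqs)))

      # Synthetic noise with a pulse dispersed at DM = 50 injected into each channel.
      rng = np.random.default_rng(0)
      data = rng.normal(size=(64, 4096))
      true = np.round(K_DM * 50 * (freqs ** -2 - freqs.max() ** -2) / dt).astype(int)
      for i, d in enumerate(true):
          data[i, 1000 + d] += 10.0

      best = max(range(100), key=lambda dm: dedisperse(data, dm).max())
      print("best trial DM:", best)                    # recovers ~50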

  8. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    PubMed Central

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2012-01-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
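
    Below is a CPU reference of the block-matching kernel that the paper ports to CUDA (on the GPU, candidate displacements are evaluated in parallel by separate threads); the block size and search radius are typical but arbitrary choices.

      import numpy as np

      def best_displacement(cur, ref, bx, by, B=16, radius=8):
          """Exhaustively scan a search window and keep the smallest-SAD shift."""
          block = cur[by:by+B, bx:bx+B].astype(np.int64)
          best, best_dxy = None, (0, 0)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  x, y = bx + dx, by + dy
                  if 0 <= x and 0 <= y and x + B <= ref.shape[1] and y + B <= ref.shape[0]:
                      sad = np.abs(block - ref[y:y+B, x:x+B].astype(np.int64)).sum()
                      if best is None or sad < best:
                          best, best_dxy = sad, (dx, dy)
          return best_dxy, best

      cur = np.random.randint(0, 256, (64, 64))
      ref = np.roll(cur, (2, 3), axis=(0, 1))        # reference = current shifted by (2, 3)
      print(best_displacement(cur, ref, bx=24, by=24))   # expect displacement (3, 2), SAD 0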

  9. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  10. 20 CFR 628.804 - Authorized services.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... exposure to work and the requirements for successful job retention. (2) A limited internship should be... THE JOB TRAINING PARTNERSHIP ACT Youth Training Program § 628.804 Authorized services. (a) The SDA and... participant (section 264(d)(3)(A)). (e) The provision of work experience, job search assistance, job search...

  11. Recreational Water Illness (RWI) - Infectious Disease Epidemiology Program

    Science.gov Websites


  12. Future of Department of Defense Cloud Computing Amid Cultural Confusion

    DTIC Science & Technology

    2013-03-01

    enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments... endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud... through data center consolidation and individual Service-provided cloud computing.

  13. 34 CFR 361.48 - Scope of vocational rehabilitation services for individuals with disabilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... services for individuals who are blind. (l) Job-related services, including job search and placement assistance, job retention services, follow-up services, and follow-along services. (m) Supported employment...

  14. OpenSearch technology for geospatial resources discovery

    NASA Astrophysics Data System (ADS)

    Papeschi, Fabrizio; Boldrini, Enrico; Mazzetti, Paolo

    2010-05-01

    In 2005, the term Web 2.0 was coined by Tim O'Reilly to describe a quickly growing set of Web-based applications that share a common philosophy of "mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing". Around this same period, OpenSearch, a new Web 2.0 technology, was developed. More properly, OpenSearch is a collection of technologies that allow publishing of search results in a format suitable for syndication and aggregation; it is a way for websites and search engines to publish search results in a standard and accessible format. Due to its strong impact on the way the Web is perceived by users, and due to its relevance for businesses, Web 2.0 has attracted the attention of both the mass media and the scientific community. This explosive growth in popularity of Web 2.0 technologies like OpenSearch, together with practical applications of Service Oriented Architecture (SOA), has resulted in an increased interest in the similarities, convergence, and potential synergy of these two concepts. SOA can be seen as the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. Service consumers may then retrieve these services, compose and use them according to their current needs. The great degree of similarity between SOA and Web 2.0 may be leading to a convergence between the two paradigms, but they also expose divergent elements, such as Web 2.0's support for human interaction in contrast to SOA's typically machine-to-machine interaction. In line with these considerations, the Geospatial Information (GI) domain is also taking its first steps towards a new approach to data publishing and discovery, in particular by taking advantage of the OpenSearch technology. A specific GI niche is represented by the OGC Catalogue Service for the Web (CSW), part of the OGC Web Services (OWS) specification suite, which provides a set of services for discovery, access, and processing of geospatial resources in a SOA framework. GI-cat is a distributed CSW framework implementation developed by the ESSI Lab of the Italian National Research Council (CNR-IMAA) and the University of Florence. It provides brokering and mediation functionalities towards heterogeneous resources and inventories, exposing several standard interfaces for query distribution. This work focuses on a new GI-cat interface which allows the catalog to be queried according to the OpenSearch syntax specification, thus filling the gap between the SOA architectural design of the CSW and Web 2.0. At the moment there is no OGC standard specification on this topic, but an official change request has been proposed in order to enable OGC catalogues to support OpenSearch queries. This change request proposes an OpenSearch extension providing a standard mechanism to query a resource based on temporal and geographic extents, along with two new catalog operations for publishing a suitable OpenSearch interface. This extended interface is implemented by the modular GI-cat architecture through a new profiling module called the "OpenSearch profiler". Since GI-cat also acts as a clearinghouse catalog, another component called the "OpenSearch accessor" is added in order to access OpenSearch-compliant services. An important role in the GI-cat extension is played by the adopted mapping strategy. Two different kinds of mapping are required: query mapping and response-element mapping.
Query mapping fits the simple OpenSearch query syntax to the complex CSW query expressed in the OGC Filter syntax. The GI-cat internal data model is based on the ISO 19115 profile, which is more complex than the simple XML syndication formats, such as RSS 2.0 and Atom 1.0, suggested by OpenSearch. Once response elements are available, they need to be translated from the GI-cat internal data model to the above-mentioned syndication formats in order to be presented; the mapping process is bidirectional. When GI-cat is used to access OpenSearch-compliant services, the CSW query must be mapped to the OpenSearch query, and the response elements must be translated according to the GI-cat internal data model. As a result of these extensions, GI-cat provides a user-friendly facade to the complex CSW interface, enabling it to be queried, for example, from a browser toolbar.
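
    The query-mapping idea can be sketched as a translation of flat OpenSearch parameters into an OGC Filter fragment for a CSW GetRecords request; the element names below follow common OGC usage but are simplified relative to GI-cat's real mapping.

      def opensearch_to_ogc_filter(params):
          clauses = []
          if "searchTerms" in params:
              clauses.append(
                  "<ogc:PropertyIsLike wildCard='*' singleChar='?' escapeChar='\\'>"
                  "<ogc:PropertyName>AnyText</ogc:PropertyName>"
                  f"<ogc:Literal>*{params['searchTerms']}*</ogc:Literal>"
                  "</ogc:PropertyIsLike>")
          if "box" in params:  # OpenSearch-Geo style: west,south,east,north
              w, s, e, n = params["box"].split(",")
              clauses.append(
                  "<ogc:BBOX><ogc:PropertyName>BoundingBox</ogc:PropertyName>"
                  f"<gml:Envelope><gml:lowerCorner>{s} {w}</gml:lowerCorner>"
                  f"<gml:upperCorner>{n} {e}</gml:upperCorner></gml:Envelope></ogc:BBOX>")
          body = clauses[0] if len(clauses) == 1 else "<ogc:And>" + "".join(clauses) + "</ogc:And>"
          return f"<ogc:Filter>{body}</ogc:Filter>"

      print(opensearch_to_ogc_filter({"searchTerms": "temperature", "box": "-10,35,20,60"}))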

  15. Health literacy and usability of clinical trial search engines.

    PubMed

    Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K

    2014-01-01

    Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.

  16. Status of the UC-Berkeley SETI efforts

    NASA Astrophysics Data System (ADS)

    Korpela, E. J.; Anderson, D. P.; Bankay, R.; Cobb, J.; Howard, A.; Lebofsky, M.; Siemion, A. P. V.; von Korff, J.; Werthimer, D.

    2011-10-01

    We summarize radio and optical SETI programs based at the University of California, Berkeley. The SEVENDIP optical pulse search looks for ns time scale pulses at visible wavelengths. It utilizes an automated 30 inch telescope, three ultra-fast photomultiplier tubes and a coincidence detector. The target list includes F, G, K and M stars, globular clusters and galaxies. The ongoing SERENDIP V.v sky survey searches for radio signals at the 300 meter Arecibo Observatory. The currently installed configuration supports 128 million channels over a 200 MHz bandwidth with ~1.6 Hz spectral resolution. Frequency stepping allows the spectrometer to cover the full 300 MHz band of the Arecibo L-band receivers. The final configuration will allow data from all 14 receivers in the Arecibo L-band Focal Array to be monitored simultaneously with over 1.8 billion channels. SETI@home uses the desktop computers of volunteers to analyze over 160 TB of data taken at Arecibo. Over 6 million volunteers have run SETI@home during its 10 year history. The SETI@home sky survey is 10 times more sensitive than SERENDIP V.v but it covers only a 2.5 MHz band, centered on 1420 MHz. SETI@home searches a much wider parameter space, including 14 octaves of signal bandwidth and 15 octaves of pulse period with Doppler drift corrections from -100 Hz/s to +100 Hz/s. SETI@home is being expanded to analyze data collected during observations of Kepler objects of interest in May 2011. The Astropulse project is the first SETI search for μs time scale pulses in the radio spectrum. Because short pulses are dispersed by the interstellar medium, and the amount of dispersion is unknown, Astropulse must search through 30,000 possible dispersions. Substantial computing power is required to conduct this search, so the project uses volunteers and their personal computers to carry out the computation (using distributed computing similar to SETI@home). Keywords: radio instrumentation, FPGA spectrometers, SETI, optical SETI, Search for Extraterrestrial Intelligence, volunteer computing, radio transients, optical transients.

  17. Optimal search strategies for detecting health services research studies in MEDLINE

    PubMed Central

    Wilczynski, Nancy L.; Haynes, R. Brian; Lavis, John N.; Ramkissoonsingh, Ravi; Arnold-Oatley, Alexandra E.

    2004-01-01

    Background Evidence from health services research (HSR) is currently thinly spread through many journals, making it difficult for health services researchers, managers and policy-makers to find research on clinical practice guidelines and the appropriateness, process, outcomes, cost and economics of health care services. We undertook to develop and test search terms to retrieve from the MEDLINE database HSR articles meeting minimum quality standards. Methods The retrieval performance of 7445 methodologic search terms and phrases in MEDLINE (the test) were compared with a hand search of the literature (the gold standard) for each issue of 68 journal titles for the year 2000 (a total of 25 936 articles). We determined sensitivity, specificity and precision (the positive predictive value) of the MEDLINE search strategies. Results A majority of the articles that were classified as outcome assessment, but fewer than half of those in the other categories, were considered methodologically acceptable (no methodologic criteria were applied for cost studies). Combining individual search terms to maximize sensitivity, while keeping specificity at 50% or more, led to sensitivities in the range of 88.1% to 100% for several categories (specificities ranged from 52.9% to 97.4%). When terms were combined to maximize specificity while keeping sensitivity at 50% or more, specificities of 88.8% to 99.8% were achieved. When terms were combined to maximize sensitivity and specificity while minimizing the differences between the 2 measurements, most strategies for HSR categories achieved sensitivity and specificity of at least 80%. Interpretation Sensitive and specific search strategies were validated for retrieval of HSR literature from MEDLINE. These strategies have been made available for public use by the US National Library of Medicine at www.nlm.nih.gov/nichsr/hedges/search.html. PMID:15534310
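
    For reference, the three retrieval measures used above, with true/false positives and negatives (TP, FP, TN, FN) counted against the hand-search gold standard:

      \mathrm{sensitivity} = \frac{TP}{TP + FN}, \qquad
      \mathrm{specificity} = \frac{TN}{TN + FP}, \qquad
      \mathrm{precision} = \frac{TP}{TP + FP}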

  18. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury’s architecture includes three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software is packaged in such a way that all the Mercury projects use the same harvester scripts, with each project driven by a set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API which is used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.

  19. Aplastic Anemia and Myelodysplastic Syndromes

    MedlinePlus


  20. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy solution (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared with the traditional A* search algorithm implementation, while still guaranteeing the optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle the problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with the state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
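
    The sequential kernel that gOSPREY parallelises is ordinary best-first A*: expand the node minimizing g + h, where the heuristic h never overestimates the remaining cost, which is what preserves the optimality guarantee. In this generic sketch a grid shortest path stands in for the conformation tree.

      import heapq

      def a_star(start, goal, neighbors, h):
          """Best-first search; an admissible h guarantees an optimal solution."""
          frontier = [(h(start), 0, start, [start])]    # (g + h, g, node, path)
          best_g = {}
          while frontier:
              _, g, node, path = heapq.heappop(frontier)
              if node == goal:
                  return g, path
              if best_g.get(node, float("inf")) <= g:
                  continue                              # stale or dominated entry
              best_g[node] = g
              for nxt, cost in neighbors(node):
                  heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
          return None

      def grid_neighbors(p):                            # 10x10 grid stand-in
          x, y = p
          return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= x + dx < 10 and 0 <= y + dy < 10]

      manhattan = lambda p: abs(p[0] - 9) + abs(p[1] - 9)   # admissible heuristic
      print(a_star((0, 0), (9, 9), grid_neighbors, manhattan)[0])   # 18 steps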

  1. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require achieving a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  2. Building a Propulsion Experiment Project Management Environment

    NASA Technical Reports Server (NTRS)

    Keiser, Ken; Tanner, Steve; Hatcher, Danny; Graves, Sara

    2004-01-01

    What do you get when you cross rocket scientists with computer geeks? It is an interactive, distributed computing web of tools and services providing a more productive environment for propulsion research and development. The Rocket Engine Advancement Program 2 (REAP2) project involves researchers at several institutions collaborating on propulsion experiments and modeling. In an effort to facilitate these collaborations among researchers at different locations and with different specializations, researchers at the Information Technology and Systems Center, University of Alabama in Huntsville, are creating a prototype web-based interactive information system in support of propulsion research. This system, to be based on experience gained in creating similar systems for NASA Earth science field experiment campaigns such as the Convection and Moisture Experiments (CAMEX), will assist in the planning and analysis of model and experiment results across REAP2 participants. The initial version of the Propulsion Experiment Project Management Environment (PExPM) consists of a controlled-access web portal facilitating the drafting and sharing of working documents and publications. Interactive tools for building and searching an annotated bibliography of publications related to REAP2 research topics have been created to help organize and maintain the results of literature searches. Also work is underway, with some initial prototypes in place, for interactive project management tools allowing project managers to schedule experiment activities, track status and report on results. This paper describes current successes, plans, and expected challenges for this project.

  3. Occupational health profile of workers employed in the manufacturing sector of India.

    PubMed

    Suri, Shivali; Das, Ranjan

    2016-01-01

    The occupational health scenario of workers engaged in the manufacturing sector in India deserves attention for their safety and increasing productivity. We reviewed the status of the manufacturing sector, identified hazards faced by workers, and assessed the existing legislation and healthcare delivery mechanisms. From October 2014 to March 2015, we did a literature review by manual search of pre-identified journals, general electronic search, electronic search of dedicated websites/databases, and personal communication with experts in occupational health. An estimated 115 million workers are engaged in the manufacturing sector, though the Labour Bureau takes into account only the one-tenth of them who work in factories registered with the government. Most reports mention neither the human capital employed, nor workers' quality of life, nor the occupational health services available to them. The incidence of accidents was documented until 2011, and an industry-wise breakup of the data is not available. Occupational hazards reported include hypertension, stress, liver disease, diabetes, tuberculosis, eye/hearing problems, cancers, etc. We found no studies for manufacturing industries in glass, tobacco, computer and allied products, etc. The incidence of accidents is decreasing, but the proportion of fatalities is increasing. Multiple legislations exist which cover occupational health, but most of these are old and have not been amended adequately to reflect the present situation. There is a shortage of manpower and of occupational health statistics for dealing with surveillance, prevention and regulation in this sector. There is an urgent need for a modern occupational health legislation and an effective machinery to enforce it, preferably through intersectoral coordination between the Employees' State Insurance Corporation, factories and state governments. Occupational health should be integrated with the general health services.

  4. A content analysis of electronic cigarette manufacturer websites in China.

    PubMed

    Yao, Tingting; Jiang, Nan; Grana, Rachel; Ling, Pamela M; Glantz, Stanton A

    2016-03-01

    The goal of this study was to summarise the websites of electronic cigarette (e-cigarette) manufacturers in China and describe how they market their products. From March to April 2013, we used two search keywords, 'electronic cigarette' (Dian Zi Xiang Yan in Chinese) and 'manufacturer' (Sheng Chan Chang Jia in Chinese), to search for e-cigarette manufacturers in China on Alibaba, an internet-based e-commerce business that covers business-to-business online marketplaces, retail and payment platforms, a shopping search engine and data-centric cloud computing services. A total of 18 websites of 12 e-cigarette manufacturers in China were analysed using a coding guide covering 14 marketing claims. Health-related benefits were claimed most frequently (89%), followed by claims of no secondhand smoke (SHS) exposure (78%) and utility for smoking cessation (67%). A wide variety of flavours, celebrity endorsements and e-cigarettes specifically for women were presented. None of the websites had any age restriction on access, or any references to government regulation or lawsuits. Instructions on how to use e-cigarettes appeared on 17% of the websites. Better regulation of e-cigarette marketing messages on manufacturers' websites is needed in China. The frequent claims of health benefits and utility for smoking cessation, and the marketing strategies appealing to youth and women, are concerning, especially the targeting of women. Regulators should prohibit marketing claims of health benefits, no SHS exposure and value for smoking cessation in China until health-related, quality and safety issues have been adequately addressed. To prevent e-cigarette use becoming an initiation into nicotine addiction, messages targeting youth and women should be prohibited. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. 22 CFR 1304.9 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... of producing the documents. (2) Searches—(i) Manual searches. Search fees will be assessed at the rate of $25.30 per hour. Charges for search time less than a full hour will be in increments of quarter hours. (ii) Computer searches. The FOIA Officer will charge the actual direct costs of conducting...

  6. 22 CFR 1304.9 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of producing the documents. (2) Searches—(i) Manual searches. Search fees will be assessed at the rate of $25.30 per hour. Charges for search time less than a full hour will be in increments of quarter hours. (ii) Computer searches. The FOIA Officer will charge the actual direct costs of conducting...

  7. 22 CFR 1304.9 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of producing the documents. (2) Searches—(i) Manual searches. Search fees will be assessed at the rate of $25.30 per hour. Charges for search time less than a full hour will be in increments of quarter hours. (ii) Computer searches. The FOIA Officer will charge the actual direct costs of conducting...

  8. Full Text Searching and Customization in the NASA ADS Abstract Service

    NASA Technical Reports Server (NTRS)

    Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Kurtz, M. J.; Henneken, E. A.; Thompson, D. M.; Murray, S. S.

    2004-01-01

    The NASA-ADS Abstract Service provides a sophisticated search capability for the literature in Astronomy, Planetary Sciences, Physics/Geophysics, and Space Instrumentation. The ADS is funded by NASA, and access to the ADS services is free to anybody worldwide without restrictions. It allows the user to search the literature by author, title, and abstract text. The ADS database contains over 3.6 million references, with 965,000 in the Astronomy/Planetary Sciences database and 1.6 million in the Physics/Geophysics database. Two-thirds of the records have full abstracts; the rest are table-of-contents entries (titles and author lists only). The coverage of the Astronomy literature is better than 95% from 1975. Before that we cover all major journals and many smaller ones. Most of the journal literature is covered back to volume 1. We now get abstracts on a regular basis from most journals. Over the last year we have entered basically all conference proceedings tables of contents that are available at the Harvard-Smithsonian Center for Astrophysics library. This has greatly increased the coverage of conference proceedings in the ADS. The ADS also covers the arXiv preprints. We download these preprints every night and index all of them. They can be searched either together with the other abstracts or separately. There are currently about 260,000 preprints in that database. In January 2004 we introduced two new services, full-text searching and a personal notification service called "myADS". As with all other ADS services, these are free for anybody to use.

  9. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform, and language, independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter-types required a significant amount of additional development. We also noted that Web services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
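
    On the caller's side, the thin-client pattern described above reduces to a remote invocation like the following sketch; the endpoint URL and parameter names are hypothetical illustrations, not the actual SCEC/CME service interface.

      import requests

      def convert_utm_to_latlon(easting, northing, zone):
          """Ask a remote service to do the conversion instead of computing locally."""
          response = requests.get(
              "https://example.org/services/coordconvert",   # hypothetical endpoint
              params={"easting": easting, "northing": northing, "zone": zone},
              timeout=30)
          response.raise_for_status()
          return response.json()        # e.g. {"lat": ..., "lon": ...}

      # The client needs no FORTRAN runtime, GMT install, or large CPU -- only a
      # network connection, which is exactly the trade-off the abstract notes.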

  10. Irreconcilable difference between quantum walks and adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.; Meyer, David A.

    2016-06-01

    Continuous-time quantum walks and adiabatic quantum evolution are two general techniques for quantum computing, both of which are described by Hamiltonians that govern their evolutions by Schrödinger's equation. In the former, the Hamiltonian is fixed, while in the latter, the Hamiltonian varies with time. As a result, their formulations of Grover's algorithm evolve differently through Hilbert space. We show that this difference is fundamental; they cannot be made to evolve along each other's path without introducing structure more powerful than the standard oracle for unstructured search. For an adiabatic quantum evolution to evolve like the quantum walk search algorithm, it must interpolate between three fixed Hamiltonians, one of which is complex and introduces structure that is stronger than the oracle for unstructured search. Conversely, for a quantum walk to evolve along the path of the adiabatic search algorithm, it must be a chiral quantum walk on a weighted, directed star graph with structure that is also stronger than the oracle for unstructured search. Thus, the two techniques, although similar in being described by Hamiltonians that govern their evolution, compute by fundamentally irreconcilable means.
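
    For reference, the two formulations discussed above are conventionally written as follows (these are the standard textbook forms for quantum walk search and adiabatic Grover search, not expressions reproduced from the paper itself): the walk evolves under a fixed Hamiltonian, while the adiabatic algorithm interpolates in time between an initial and a final Hamiltonian.

        % Quantum walk search: fixed Hamiltonian
        % (A = graph adjacency matrix, |w> = marked state, gamma = hopping rate)
        H_{\mathrm{walk}} = -\gamma A - \lvert w\rangle\langle w\rvert

        % Adiabatic search: time-dependent interpolation, s = t/T from 0 to 1
        H(s) = (1-s)\bigl(I - \lvert\psi_0\rangle\langle\psi_0\rvert\bigr)
             + s\,\bigl(I - \lvert w\rangle\langle w\rvert\bigr)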

  11. 78 FR 57884 - Recent Trends in U.S. Services Trade, 2014 Annual Report

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-20

    ... on electronic services (audiovisual, computer, and telecommunication services). The Commission is... (audiovisual, computer, and telecommunication services). Under Commission investigation No. 332-345, the... 2014 report will focus on trade in electronic services (audiovisual, computer, and telecommunication...

  12. A Tailored Ontology Supporting Sensor Implementation for the Maintenance of Industrial Machines.

    PubMed

    Maleki, Elaheh; Belkadi, Farouk; Ritou, Mathieu; Bernard, Alain

    2017-09-08

    The long-term productivity of an industrial machine is improved by condition-based maintenance strategies. To support these, sensors and other cyber-physical devices must be integrated in order to capture and analyze the machine's condition throughout its lifespan. Choosing the best sensor is thus a critical step in ensuring the efficiency of the maintenance process. Indeed, given the variety of sensors and their features and performance, a formal classification of the sensor domain knowledge is crucial. Such a classification facilitates the search for, and reuse of, solutions during the design of a new maintenance service. Following a knowledge management methodology, the paper proposes and develops a new sensor ontology that structures the domain knowledge, covering both theoretical and experimental sensor attributes. An industrial case study is conducted to validate the proposed ontology and to demonstrate its utility as a guideline that eases the search for suitable sensors. Based on the ontology, the final solution will be implemented in a shared repository connected to legacy CAD (computer-aided design) systems. The best sensor is selected, first, by matching application requirements against the sensor specifications proposed by this repository, and then refined from the experimentation results. The achieved solution is recorded in the sensor repository for future reuse. As a result, the time and cost of designing new condition-based maintenance services are reduced.
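
    As a minimal illustration of encoding sensor domain knowledge as ontology triples, the sketch below uses Python's rdflib; the namespace, classes, and properties (measures, bandwidthHz) are invented for illustration and are not the ontology proposed in the paper.

        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        SENSOR = Namespace("http://example.org/sensor#")  # hypothetical namespace

        g = Graph()
        g.bind("sensor", SENSOR)

        # A tiny class hierarchy plus two illustrative attributes of the kind
        # (theoretical and experimental) the ontology is meant to classify.
        g.add((SENSOR.Sensor, RDF.type, RDFS.Class))
        g.add((SENSOR.Accelerometer, RDFS.subClassOf, SENSOR.Sensor))
        g.add((SENSOR.Accelerometer, SENSOR.measures, Literal("vibration")))
        g.add((SENSOR.Accelerometer, SENSOR.bandwidthHz, Literal(10000)))

        print(g.serialize(format="turtle"))  # shareable, searchable form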

  13. WhatsApp in Clinical Practice: A Literature Review.

    PubMed

    Mars, Maurice; Scott, Richard E

    2016-01-01

    Several spontaneous telemedicine services using WhatsApp Messenger have started in South Africa, raising issues of confidentiality, data security and storage, record keeping, and reporting. This study reviewed the literature on WhatsApp in clinical practice to determine how it is used and users' satisfaction. The PubMed, Scopus, Science Direct, and IEEE databases were searched using the search term WhatsApp, and Google Scholar using the terms WhatsApp Telemedicine and WhatsApp mHealth. Thirty-two papers covering 17 disciplines were relevant, with the most papers (12) from India. Seventeen papers reported the use of WhatsApp groups within departments, 14 of which were in surgery-related disciplines. Groups improved communication and the advice given on patient management. Confidentiality was mentioned in 19 papers and consent in five. Data security was partially addressed in 11 papers, with little understanding of how data are transmitted and stored. Telemedicine services outside of departmental groups were reported in seven papers and covered emergency triage in maxillofacial, plastic, neuro- and general surgery, as well as cardiology and telestroke. WhatsApp is seen to be a simple, cheap, and effective means of communication within the clinical health sector, and its use will grow. Users have paid little attention to confidentiality, consent, and data security. Guidelines for using WhatsApp for telemedicine are required, including for downloading WhatsApp messages to a computer for integration with electronic medical records.

  14. A blind hierarchical coherent search for gravitational-wave signals from coalescing compact binaries in a network of interferometric detectors

    NASA Astrophysics Data System (ADS)

    Bose, Sukanta; Dayanga, Thilina; Ghosh, Shaon; Talukder, Dipongkar

    2011-07-01

    We describe a hierarchical data-analysis pipeline for coherently searching for gravitational-wave signals from non-spinning compact binary coalescences (CBCs) in the data of multiple Earth-based detectors. This search assumes no prior information on the sky position of the source or the time of occurrence of its transient signals and hence is termed 'blind'. The pipeline computes the coherent network search statistic that is optimal in stationary, Gaussian noise. More importantly, it allows for the computation of a suite of alternative multi-detector coherent search statistics and signal-based discriminators that can improve the performance of CBC searches in real data, which can be both non-stationary and non-Gaussian. Also, unlike the coincident multi-detector search statistics employed so far, the coherent statistics check for the consistency of the signal amplitudes and phases in the different detectors with their different orientations and with the signal arrival times in them. Since the computation of coherent statistics entails searching over the sky, it is more expensive than that of the coincident statistics, which do not require it. To reduce computational costs, the first stage of the hierarchical pipeline constructs coincidences of triggers from the multiple interferometers by requiring their proximity in time and component masses. The second stage follows up on these coincident triggers by computing the coherent statistics. Here, we compare the performance of this hierarchical pipeline with and without the second (coherent) stage in Gaussian noise. Although introducing hierarchy can be expected to cause some degradation in detection efficiency compared to a single-stage coherent pipeline, it improves the computational speed of the search considerably. The two main results of this work are as follows: (1) the performance of the hierarchical coherent pipeline on Gaussian data is shown to be better than that of the pipeline with just the coincident stage; (2) the three-site network of the LIGO detectors in Hanford and Livingston (USA) and the Virgo detector in Cascina (Italy) cannot resolve the polarization of waves arriving from certain parts of the sky. This can cause the three-site coherent statistic at those sky positions to become singular. Regularized versions of the statistic can avoid that problem but can be expected to be sub-optimal. The aforementioned improvement in the pipeline's performance due to the coherent stage is in spite of this handicap.
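
    A toy sketch of the first-stage coincidence test described above: triggers from two interferometers are paired when their arrival times and component masses agree within chosen windows. The Trigger fields and the window values (10 ms, 0.1 solar mass) are illustrative assumptions, not the pipeline's actual settings.

        from dataclasses import dataclass

        @dataclass
        class Trigger:
            time: float   # arrival time (s)
            m1: float     # component masses (solar masses)
            m2: float

        def coincident(a, b, dt=0.010, dm=0.1):
            """True if two single-detector triggers agree in time and masses."""
            return (abs(a.time - b.time) <= dt and
                    abs(a.m1 - b.m1) <= dm and
                    abs(a.m2 - b.m2) <= dm)

        h1 = [Trigger(1000.0021, 1.4, 1.4), Trigger(1250.5, 5.0, 3.2)]
        l1 = [Trigger(1000.0043, 1.4, 1.4)]

        # Candidate pairs passed on to the costlier coherent (second) stage.
        pairs = [(a, b) for a in h1 for b in l1 if coincident(a, b)]
        print(pairs)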

  15. A group arrival retrial G - queue with multi optional stages of service, orbital search and server breakdown

    NASA Astrophysics Data System (ADS)

    Radha, J.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    A group arrival feedback retrial queue with k optional stages of service and an orbital search policy is studied. If an arriving group of customers finds the server free, one customer from the group enters the first stage of service and the rest of the group join the orbit. After completing the i-th stage of service, the customer under service may opt for the (i+1)-th stage with probability $\theta_i$, may rejoin the orbit as a feedback customer with probability $p_i$, or may leave the system with probability

        q_i = \begin{cases} 1 - p_i - \theta_i, & i = 1, 2, \ldots, k-1, \\ 1 - p_i, & i = k. \end{cases}

    A busy server may break down due to the arrival of negative customers, in which case the service channel fails for a short interval of time. On completing a service or a repair, the server searches for a customer in the orbit (if any) with probability α or remains idle with probability 1-α. Using the supplementary variable method, the steady-state probability generating function for the system size and some system performance measures are derived.
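
    A minimal simulation sketch of the stage-transition rule above: after completing stage i the customer proceeds to stage i+1 with probability θi, rejoins the orbit as a feedback customer with probability pi, and otherwise departs (probability qi). The numeric probabilities are illustrative only.

        import random

        k = 3
        theta = [0.3, 0.2, 0.0]   # no further stage after stage k
        p     = [0.1, 0.1, 0.1]

        def serve_customer():
            i = 0
            while True:
                r = random.random()
                if i < k - 1 and r < theta[i]:
                    i += 1                      # opt for stage i+1
                elif r < theta[i] + p[i]:
                    return "feedback to orbit"  # probability p_i
                else:
                    return "departs"            # probability q_i

        print([serve_customer() for _ in range(5)])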

  16. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, realized by a rule-generation mechanism called “bridging”, which is based on bottom-up parsing of positive samples and a search for rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to global search, which synthesizes minimal rule sets, and serial search, another method that synthesizes semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and analyzes the sizes of the resulting rule sets and the computation time.
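
    A generic beam-search skeleton of the kind incorporated for synthesizing semi-minimal rule sets: at each step only the `width` best partial candidates are retained. The `expand`, `score`, and `is_goal` callables stand in for Synapse's rule generation and rule-set-size measures, which this sketch does not reproduce; the demo at the end is a toy string-building problem.

        def beam_search(start, expand, score, is_goal, width=3, max_steps=100):
            beam = [start]
            for _ in range(max_steps):
                candidates = [c for state in beam for c in expand(state)]
                if not candidates:
                    break
                candidates.sort(key=score)      # smaller = better (fewer rules)
                beam = candidates[:width]       # keep only the best `width`
                for state in beam:
                    if is_goal(state):
                        return state
            return None

        # Toy demo: grow strings one character at a time toward "ab".
        result = beam_search(
            "",
            expand=lambda s: [s + c for c in "ab"] if len(s) < 2 else [],
            score=lambda s: sum(c != t for c, t in zip(s.ljust(2), "ab")),
            is_goal=lambda s: s == "ab",
            width=2,
        )
        print(result)  # "ab"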

  17. A sub-space greedy search method for efficient Bayesian Network inference.

    PubMed

    Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing

    2011-09-01

    Bayesian networks (BNs) have been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is its computational cost: the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. In particular, this method limits the greedy search space by selecting only gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieved results comparable with the standard greedy search method while saving ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology.
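
    A hedged sketch of the pair-filtering idea: estimate partial correlations from the precision (inverse covariance) matrix, a standard identity, and keep only gene pairs above a threshold as the restricted space for the greedy search. The data and threshold are synthetic; the paper's exact estimator may differ.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))               # 200 samples x 10 genes (toy)

        P = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
        d = np.sqrt(np.diag(P))
        partial_corr = -P / np.outer(d, d)           # standard identity

        threshold = 0.15
        candidates = [(i, j) for i in range(10) for j in range(i + 1, 10)
                      if abs(partial_corr[i, j]) > threshold]
        print(candidates)   # restricted edge set for the greedy stage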

  18. PrimerStation: a highly specific multiplex genomic PCR primer design server for the human genome

    PubMed Central

    Yamada, Tomoyuki; Soma, Haruhiko; Morishita, Shinichi

    2006-01-01

    PrimerStation () is a web service that calculates primer sets guaranteeing high specificity against the entire human genome. To achieve high accuracy, we used the hybridization ratio of primers in liquid solution. Calculating the status of sequence hybridization in terms of the stringent hybridization ratio is computationally costly, and no other web service checks the entire human genome and returns a highly specific primer set calculated using a precise physicochemical model. To shorten the response time, we precomputed candidates for specific primers about 3 months in advance using a massively parallel computer with 100 CPUs (SunFire 15K). This enables PrimerStation to search and output qualified primers interactively. PrimerStation can select highly specific primers suitable for multiplex PCR by seeking a wider temperature range that minimizes the possibility of cross-reaction. It also allows users to add heuristic rules to the primer design, e.g. the exclusion of single nucleotide polymorphisms (SNPs) in primers, the avoidance of poly(A) and CA-repeats in the PCR products, and the elimination of defective primers using secondary structure prediction. We performed several tests to verify the PCR amplification of randomly selected primers for ChrX and confirmed that the primers amplify the specific PCR products perfectly. PMID:16845094
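
    As a flavor of the temperature screening a primer-design service performs, the sketch below filters primers by the rough Wallace-rule melting temperature, Tm = 2(A+T) + 4(G+C). This rule is only a coarse stand-in: PrimerStation itself uses a far more precise physicochemical hybridization model, and the temperature window below is an arbitrary choice.

        def wallace_tm(primer: str) -> int:
            """Rough melting-temperature estimate via the Wallace rule."""
            s = primer.upper()
            return 2 * (s.count("A") + s.count("T")) + \
                   4 * (s.count("G") + s.count("C"))

        def within_range(primers, lo=50, hi=65):
            """Keep primers whose rough Tm falls in a common multiplex window."""
            return [p for p in primers if lo <= wallace_tm(p) <= hi]

        print(within_range(["ATGCGTACGTTAGCATCG", "ATATATATATAT"]))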

  19. Following the drill: the search for a dentist.

    PubMed

    Motes, W H; Huhmann, B A; Hill, C J

    1995-01-01

    The authors identify strategically useful distinctions between the activities of potential patients in search of specialized vs. routine dental services. Survey findings question the advisability of assuming (1) that what occurs in the search process for routine dental care will automatically be mirrored in the process for more specialized services and (2) that potential patients use the same specific sources of information--both between (e.g., physicians vs. dentists) and within (e.g., specialized dental care vs. routine dental care) existing health care typologies.

  20. The effects of integrating service learning into computer science: an inter-institutional longitudinal study

    NASA Astrophysics Data System (ADS)

    Payton, Jamie; Barnes, Tiffany; Buch, Kim; Rorrer, Audrey; Zuo, Huifang

    2015-07-01

    This study is a follow-up to one published in Computer Science Education in 2010 that reported preliminary results showing a positive impact of service learning on student attitudes associated with success and retention in computer science. That paper described how service learning was incorporated into a computer science course in the context of the Students & Technology in Academia, Research, and Service (STARS) Alliance, an NSF-supported broadening-participation-in-computing initiative that aims to diversify the computer science pipeline through innovative pedagogy and inter-institutional partnerships. The current paper describes how the STARS Alliance has expanded to diverse institutions, all using service learning as a vehicle for broadening participation in computing and enhancing attitudes and behaviors associated with student success. Results supported the STARS model of service learning for enhancing computing efficacy and computing commitment, and for providing diverse students with many personal and professional development benefits.

  1. P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models

    NASA Astrophysics Data System (ADS)

    Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod

    2018-03-01

    Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation of a previously developed model, the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a computational time of about 26 h, an efficiency improvement of about 54%, and a speedup factor of 2.196 in comparison with the sequential HS-SFM. The performance of the P-HS-SFM using the create-independent-jobs approach is comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30%, and a speedup of 2.137.
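
    A rough Python analogue of the parfor pattern described above: candidate parameter sets (harmonies) are scored concurrently across cores. The objective function here is a toy placeholder; the real P-HS-SFM evaluates a social-force crowd simulation against experimental data.

        from multiprocessing import Pool

        def objective(params):
            """Placeholder fitness; stands in for a full crowd simulation."""
            a, b = params
            return (a - 1.5) ** 2 + (b - 0.8) ** 2

        if __name__ == "__main__":
            harmonies = [(1.0, 0.5), (1.4, 0.9), (2.0, 1.0), (1.6, 0.7)]
            with Pool() as pool:
                scores = pool.map(objective, harmonies)  # scored in parallel
            best = min(zip(scores, harmonies))
            print(best)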

  2. Dicoogle Mobile: a medical imaging platform for Android.

    PubMed

    Viana-Ferreira, Carlos; Ferreira, Daniel; Valente, Frederico; Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís

    2012-01-01

    Mobile computing technologies are increasingly becoming a valuable asset in healthcare information systems. Their adoption helps to improve quality of care, increase productivity, and facilitate clinical decision support. They provide practitioners with ubiquitous access to patient records and are an important component of telemedicine and tele-working environments. We have developed Dicoogle Mobile, an Android application that provides remote access to distributed medical imaging data through a cloud relay service. In addition, the application can store and index local imaging data, so that these data can also be searched and visualized. In this paper, we describe the Dicoogle Mobile concept as well as the architecture of the overall system behind it.

  3. Boolean logic tree of graphene-based chemical system for molecular computation and intelligent molecular search query.

    PubMed

    Huang, Wei Tao; Luo, Hong Qun; Li, Nian Bing

    2014-05-06

    The most serious, and yet unsolved, problem in constructing molecular computing devices is connecting all of the molecular events into a usable device. This report demonstrates the use of a Boolean logic tree for analyzing a chemical event network based on graphene, an organic dye, a thrombin aptamer, and the Fenton reaction, and for organizing and connecting these basic chemical events. This chemical event network can be utilized to implement fluorescent combinatorial logic (including basic logic gates and complex integrated logic circuits) and fuzzy logic computing. On the basis of the Boolean logic tree analysis and logic computing, the basic chemical events can be treated as programmable "words" and the chemical interactions as "syntax" logic rules with which to construct a molecular search engine for performing intelligent molecular search queries. Our approach is helpful in developing advanced molecule-based logic programs for applications in biosensing, nanotechnology, and drug delivery.
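
    In software terms, a Boolean logic tree is simply an expression tree over event truth values; the toy evaluator below illustrates the idea. The mapping of chemical events to leaves is an invented example, not the paper's actual network.

        # Leaves stand for basic chemical events (True = event observed);
        # internal nodes are gates combining them.
        def AND(*xs): return all(xs)
        def OR(*xs):  return any(xs)
        def NOT(x):   return not x

        # Example tree: (dye displaced AND thrombin bound) OR NOT(Fenton triggered)
        def query(dye, thrombin, fenton):
            return OR(AND(dye, thrombin), NOT(fenton))

        print(query(dye=True, thrombin=False, fenton=True))  # False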

  4. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which it...

  5. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which it...

  6. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 3 2012-01-01 2012-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  7. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  8. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  9. OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.

    2016-12-01

    The oceanographic community must meet the challenge of rapidly identifying features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across entire archives of 15- to 30-year ocean science datasets. Our parallel analytics engine extends the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and for storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: (1) parallel generation (Spark on a compute cluster) of 15- to 30-year ocean climatologies (e.g. sea surface temperature, SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted smoothing over latitude, longitude, and time; (2) parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; (3) parallel detection (over the time series of tiles) of anomalies or phenomena whose regional area-averages exceed a specified threshold (e.g. high SST in El Niño or SST "blob" regions), or via more complex, custom data-mining algorithms; (4) shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; and (5) scalable execution of all capabilities on a hybrid cloud, using our on-premise OpenStack cluster or Amazon. The key idea is that the parallel data-mining operations are run "near" the ocean data archives (a local network hop), so that we can efficiently access the thousands of files making up a three-decade time series. The presentation will cover the architecture of OceanXtremes, the parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time series, and parallel performance metrics.
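
    A condensed sketch of the core computation in items (1)-(3), in plain numpy rather than Spark: build a per-pixel climatology, subtract it to obtain daily anomaly fields, and flag days whose regional average exceeds a threshold. The array shapes, region, and threshold are toy values, not OceanXtremes settings.

        import numpy as np

        rng = np.random.default_rng(1)
        sst = rng.normal(20.0, 1.0, size=(365, 90, 180))  # days x lat x lon (toy)

        climatology = sst.mean(axis=0)        # per-pixel long-term mean
        anomaly = sst - climatology           # daily anomaly fields

        region = anomaly[:, 30:40, 60:80].mean(axis=(1, 2))  # area average
        hot_days = np.flatnonzero(region > 0.2)              # threshold exceedances
        print(hot_days[:10])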

  10. Computer-based Interactive Literature Searching for CSU-Chico Chemistry Students.

    ERIC Educational Resources Information Center

    Cooke, Ron C.; And Others

    The intent of this instructional manual, which is aimed at exploring the literature of a discipline and presented in a self-paced, course segment format applicable to any course content, is to enable college students to conduct computer-based interactive searches through multiple databases. The manual is divided into 10 chapters: (1) Introduction,…

  11. The University of South Carolina: College and University Computing Environment.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1987

    1987-01-01

    Both academic and administrative computing as well as network and communications services for the university are provided and supported by the Computer Services Division. Academic services, administrative services, systems engineering and database administration, communications, networking services, operations, and library technologies are…

  12. OSTI.GOV | OSTI, US Dept of Energy Office of Scientific and Technical Information

    Science.gov Websites


  13. End User Information Searching on the Internet: How Do Users Search and What Do They Search For? (SIG USE)

    ERIC Educational Resources Information Center

    Saracevic, Tefko

    2000-01-01

    Summarizes a presentation that discussed findings and implications of research projects using an Internet search service and Internet-accessible vendor databases, representing the two sides of public database searching: query formulation and resource utilization. Presenters included: Tefko Saracevic, Amanda Spink, Dietmar Wolfram and Hong Xie.…

  14. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
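
    Since the search tools above are described as generalized matched filters, the toy numpy sketch below shows the underlying matched-filter idea: slide a template over an image and report the location of highest normalized correlation. The data are synthetic, and the sketch omits TIM's supervised learning loop and functional templates.

        import numpy as np

        rng = np.random.default_rng(7)
        image = rng.normal(size=(64, 64))
        template = image[20:28, 30:38].copy()      # plant a known patch

        th, tw = template.shape
        t = (template - template.mean()) / template.std()
        best, best_score = None, -np.inf
        for i in range(64 - th + 1):
            for j in range(64 - tw + 1):
                w = image[i:i + th, j:j + tw]
                w = (w - w.mean()) / (w.std() + 1e-12)
                score = (w * t).mean()             # normalized correlation
                if score > best_score:
                    best, best_score = (i, j), score
        print(best)  # expected (20, 30)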

  15. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    PubMed

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison, and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and can therefore help in understanding the driving factors of microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing, and a dedicated database server. To further ensure speed and efficiency, we have deployed nearest-neighbor search algorithms capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance-based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result for a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including bar-chart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable, and efficient framework for searching microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive phylogeny-based distance metric, while providing numerous advantages over the current state-of-the-art tool.
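
    A minimal sketch of the pairwise rarefaction step mentioned above, under the assumption that it means subsampling both OTU count vectors without replacement to the depth of the shallower sample; Visibiome's actual procedure (adaptive rarefaction plus copy-number correction) is more involved.

        import numpy as np

        rng = np.random.default_rng(42)

        def rarefy(counts, depth):
            """Subsample an OTU count vector to a fixed total count."""
            pool = np.repeat(np.arange(len(counts)), counts)
            picked = rng.choice(pool, size=depth, replace=False)
            return np.bincount(picked, minlength=len(counts))

        a = np.array([120, 30, 0, 50])
        b = np.array([10, 5, 25, 10])
        depth = min(a.sum(), b.sum())   # depth of the shallower sample
        print(rarefy(a, depth), rarefy(b, depth))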

  16. Ocean Drilling Program: Publication Services: Online Manuscript Submission

    Science.gov Websites

    Manuscript submission: use the submission and review forms available on the IODP-USIO publications web site.

  17. SAO/NASA ADS at SAO: ADS Abstract Service

    Science.gov Websites

    The SAO/NASA ADS Abstract Service provides a gateway to the online astronomy and physics literature. The content can be navigated with filtering options as well as visualizations. Astronomy and Astrophysics Classic Search is a legacy interface that searches the 2,311,600 records currently in the astronomy database, including 198,834 abstracts.

  18. Choosing a Database for Social Work: A Comparison of Social Work Abstracts and Social Service Abstracts

    ERIC Educational Resources Information Center

    Flatley, Robert K.; Lilla, Rick; Widner, Jack

    2007-01-01

    This study compared Social Work Abstracts and Social Services Abstracts databases in terms of indexing, journal coverage, and searches. The authors interviewed editors, analyzed journal coverage, and compared searches. It was determined that the databases complement one another more than compete. The authors conclude with some considerations.

  19. Marketing: Marketing 101 for One-on-One

    ERIC Educational Resources Information Center

    Germain, Carol Anne; Bergman, Elaine Lasda

    2006-01-01

    There are occasions at a busy reference desk when contact time with patrons is limited. A solution to this is to create a mediated reference search service where librarians can have the luxury of conducting extended reference interviews. One-on-one searching services can be very beneficial for library patrons, including students, staff,…

  20. 42 CFR 35.17 - Fees and charges for copying, certification, search of records and related services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... OF HEALTH AND HUMAN SERVICES MEDICAL CARE AND EXAMINATIONS HOSPITAL AND STATION MANAGEMENT General... clinical record or other document (through use of facility equipment): (a) Processing (searching, preparation of record and use of equipment), first page $3.25 (b) Each additional page .25 (2) Certification...
