Sample records for engineering web applications

  1. Reverse Engineering and Software Products Reuse to Teach Collaborative Web Portals: A Case Study with Final-Year Computer Science Students

    ERIC Educational Resources Information Center

    Medina-Dominguez, Fuensanta; Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Amescua, Antonio

    2010-01-01

    The development of collaborative Web applications does not follow a software engineering methodology. This is because when university students study Web applications in general, and collaborative Web portals in particular, they are not being trained in the use of software engineering techniques to develop collaborative Web portals. This paper…

  2. The Use of Web Search Engines in Information Science Research.

    ERIC Educational Resources Information Center

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  3. Engineering Analysis Using a Web-based Protocol

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.; Claus, Russell W.

    2002-01-01

    This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
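    The XML-encapsulation step the abstract describes lends itself to a small illustration. Below is a minimal sketch, in Python, of serializing the inputs of one analysis case to XML; the element and attribute names (analysisCase, parameter) are invented for the example and are not taken from the paper.

    ```python
    import xml.etree.ElementTree as ET

    def encode_case(case_id: str, parameters: dict) -> str:
        """Serialize one analysis case to an XML string."""
        root = ET.Element("analysisCase", id=case_id)
        for name, value in parameters.items():
            param = ET.SubElement(root, "parameter", name=name)
            param.text = str(value)  # store the value as element text
        return ET.tostring(root, encoding="unicode")

    # Example: a hypothetical inlet case with two input parameters.
    print(encode_case("inlet-01", {"mach": 2.4, "altitude_m": 18000}))
    ```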

  4. MIRASS: medical informatics research activity support system using information mashup network.

    PubMed

    Kiah, M L M; Zaidan, B B; Zaidan, A A; Nabi, Mohamed; Ibraheem, Rabiu

    2014-04-01

    The advancement of information technology has facilitated the automation and feasibility of online information sharing. The second generation of the World Wide Web (Web 2.0) enables the collaboration and sharing of online information through Web-serving applications. Data mashup, which is considered a Web 2.0 platform, plays an important role in information and communication technology applications. However, few of these ideas have been carried into the education and research domains, particularly in medical informatics. The creation of a friendly environment for medical informatics research requires the removal of certain obstacles in terms of search time, resource credibility, and search result accuracy. This paper considers three glitches that researchers encounter in medical informatics research: the quality of papers obtained from scientific search engines (particularly Web of Science and ScienceDirect), the quality of articles from the indices of these search engines, and the customizability and flexibility of these search engines. A customizable search engine for trusted resources of medical informatics was developed and implemented through data mashup. Results show that the proposed search engine improves the usability of scientific search engines for medical informatics. The Pipe search engine was found to be more efficient than the other engines.
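    The mashup idea, merging results from several engines and keeping only trusted sources, can be sketched in a few lines. This toy Python example assumes invented record structures and a hypothetical whitelist; it is not MIRASS's actual pipeline.

    ```python
    TRUSTED = {"webofscience.com", "sciencedirect.com"}  # assumed whitelist

    def mashup(*result_lists):
        """Merge search results, keeping only trusted, not-yet-seen URLs."""
        seen, merged = set(), []
        for results in result_lists:
            for rec in results:
                domain = rec["url"].split("/")[2]
                if domain in TRUSTED and rec["url"] not in seen:
                    seen.add(rec["url"])
                    merged.append(rec)
        return merged

    source_a = [{"title": "EHR usability study", "url": "https://sciencedirect.com/a1"}]
    source_b = [{"title": "Untrusted blog post", "url": "https://example.com/b1"}]
    print(mashup(source_a, source_b))  # only the trusted record survives
    ```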

  5. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  6. Engineering Laboratory Instruction in Virtual Environment--"eLIVE"

    ERIC Educational Resources Information Center

    Chaturvedi, Sushil; Prabhakaran, Ramamurthy; Yoon, Jaewan; Abdel-Salam, Tarek

    2011-01-01

    A novel application of web-based virtual laboratories to prepare students for physical experiments is explored in some detail. The pedagogy of supplementing physical laboratory with web-based virtual laboratories is implemented by developing a web-based tool, designated in this work as "eLIVE", an acronym for Engineering Laboratory…

  7. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research Program task 8: Survey of WEBGL Graphics Engines

    DTIC Science & Technology

    2015-01-01

    A search of the Internet covering web sites specializing in graphics, graphics engines, web browser applications, and games was conducted to…

  8. Katherine Fleming | NREL

    Science.gov Websites

    Katherine Fleming is a Database and Web Applications Engineer working on database and web application development in the Commercial Buildings Research group. Her projects include … Previously, Katherine was pursuing a Ph.D. with a focus on robotics and working as a Web developer and Web accessibility…

  9. 47 CFR 5.55 - Filing of applications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... instead be submitted to the Commission's Office of Engineering and Technology, Washington, DC 20554...

  10. 47 CFR 5.55 - Filing of applications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... instead be submitted to the Commission's Office of Engineering and Technology, Washington, DC 20554...

  11. 47 CFR 5.55 - Filing of applications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... instead be submitted to the Commission's Office of Engineering and Technology, Washington, DC 20554...

  12. Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration

    NASA Technical Reports Server (NTRS)

    Lin, Risheng; Afjeh, Abdollah A.

    2003-01-01

    This paper discusses the detailed design of an XML databinding framework for aircraft engine simulation. The framework provides an object interface to access and use engine data, while at the same time preserving the meaning of the original data. The language-independent representation of engine component data enables users to move XML data using HTTP through disparate networks. The application of this framework is demonstrated via a web-based turbofan propulsion system simulation using the World Wide Web (WWW). A Java Servlet-based web component architecture is used for rendering XML engine data into HTML format and for handling input events from the user, which allows users to interact with simulation data from a web browser. The simulation data can also be saved to a local disk for archiving or to restart the simulation at a later time.
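    As a rough analogue of the databinding layer described here, the sketch below maps engine-component XML to typed objects in Python; the schema (engine, component, pressureRatio) is hypothetical, and the paper's actual framework was Java-based.

    ```python
    import xml.etree.ElementTree as ET
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        pressure_ratio: float

    XML = """<engine>
      <component name="fan" pressureRatio="1.7"/>
      <component name="compressor" pressureRatio="12.0"/>
    </engine>"""

    # Bind each <component> element to a typed Component object.
    components = [
        Component(el.get("name"), float(el.get("pressureRatio")))
        for el in ET.fromstring(XML).findall("component")
    ]
    print(components)
    ```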

  13. Concept Mapping Your Web Searches: A Design Rationale and Web-Enabled Application

    ERIC Educational Resources Information Center

    Lee, Y.-J.

    2004-01-01

    Although it has become very common to use World Wide Web-based information in many educational settings, there has been little research on how to better search and organize Web-based information. This paper discusses the shortcomings of Web search engines and Web browsers as learning environments and describes an alternative Web search environment…

  14. COEUS: “semantic web in a box” for biomedical applications

    PubMed Central

    2012-01-01

    Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467
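    The kind of SPARQL access a COEUS-style endpoint exposes can be illustrated with rdflib. The minimal sketch below builds a two-triple graph and queries it; the vocabulary (example.org/bio#) is a placeholder, not COEUS's actual ontology.

    ```python
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/bio#")
    g = Graph()
    g.add((EX.gene_brca1, EX.associatedWith, Literal("breast cancer")))
    g.add((EX.gene_tp53, EX.associatedWith, Literal("Li-Fraumeni syndrome")))

    # Query the local graph the same way a remote SPARQL endpoint would be queried.
    results = g.query(
        "SELECT ?gene ?d WHERE { ?gene <http://example.org/bio#associatedWith> ?d }"
    )
    for gene, disease in results:
        print(gene, disease)
    ```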

  15. COEUS: "semantic web in a box" for biomedical applications.

    PubMed

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  16. Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul

    2009-01-01

    Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…
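    A much-simplified version of the usage-mining-plus-recommendation loop can be written directly: count page co-occurrences in past sessions and suggest frequent companions of the current page. The session data below is invented; the AHA! recommender described in the paper is more sophisticated.

    ```python
    from collections import Counter
    from itertools import permutations

    sessions = [  # hypothetical navigation logs, one list of pages per session
        ["intro", "loops", "arrays"],
        ["intro", "arrays"],
        ["loops", "recursion"],
    ]

    co_visits = Counter()
    for session in sessions:
        co_visits.update(permutations(set(session), 2))  # ordered page pairs

    def recommend(page: str, k: int = 2) -> list:
        """Return the k pages most often visited together with `page`."""
        ranked = [(b, n) for (a, b), n in co_visits.items() if a == page]
        return [b for b, _ in sorted(ranked, key=lambda t: -t[1])[:k]]

    print(recommend("intro"))  # e.g. ['arrays', 'loops']
    ```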

  17. Clinical software development for the Web: lessons learned from the BOADICEA project

    PubMed Central

    2012-01-01

    Background In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. Results We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. Conclusions We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback. PMID:22490389

  18. Clinical software development for the Web: lessons learned from the BOADICEA project.

    PubMed

    Cunningham, Alex P; Antoniou, Antonis C; Easton, Douglas F

    2012-04-10

    In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback.

  19. Photonics applications and web engineering: WILGA Summer 2016

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2016-09-01

    Wilga Summer 2016 Symposium on Photonics Applications and Web Engineering was held on 29 May - 06 June. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, and others. The paper is an introduction to the 2016 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations.

  20. Photonics applications and web engineering: WILGA Summer 2015

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2015-09-01

    Wilga Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations.

  1. Just-in-Time Web Searches for Trainers & Adult Educators.

    ERIC Educational Resources Information Center

    Kirk, James J.

    Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…

  2. MedlinePlus Connect: Web Application

    MedlinePlus

    ... will result in a query to the MedlinePlus search engine. If you specify a code and the name/ ... system or problem code, will use the MedlinePlus search engine (English only): https://connect.medlineplus.gov/application?mainSearchCriteria. ...
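    A request to the URL shown in this record might look like the sketch below. The parameter spellings follow the HL7 Infobutton convention that MedlinePlus Connect documents, and the ICD-10-CM code-system OID is my assumption; verify both against the current MedlinePlus Connect documentation.

    ```python
    import requests

    params = {
        "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.90",  # assumed ICD-10-CM OID
        "mainSearchCriteria.v.c": "E11.9",                    # example problem code
    }
    resp = requests.get("https://connect.medlineplus.gov/application", params=params)
    print(resp.status_code, resp.url)  # the response is an HTML page of links
    ```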

  3. Photonics Applications and Web Engineering: WILGA 2017

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2017-08-01

    XLth Wilga Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May-4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, and electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks, which are traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations. This year's Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.

  4. Web-based Electronic Sharing and RE-allocation of Assets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverett, Dave; Miller, Robert A.; Berlin, Gary J.

    2002-09-09

    The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, underutilized, or unutilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare

  5. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  6. Using the Internet in Career Education. Practice Application Brief No. 1.

    ERIC Educational Resources Information Center

    Wagner, Judith O.

    The World Wide Web has a wealth of information on career planning, individual jobs, and job search methods that counselors and teachers can use. Search engines such as Yahoo! and Magellan are organized like library tools, while engines such as AltaVista and HotBot search for words or phrases. Web indexes offer a variety of features. The criteria for…

  7. 75 FR 23306 - Southern Nuclear Operating Company, et al.: Supplementary Notice of Hearing and Opportunity To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-03

    ...'' field when using either the Web-based search (advanced search) engine or the ADAMS FIND tool in Citrix... should enter ``05200011'' in the ``Docket Number'' field in the web-based search (advanced search) engine... ML100740441. To search for documents in ADAMS using Vogtle Units 3 and 4 COL application docket numbers, 52...

  8. Using Web 2.0 Techniques in NASA's Ares Engineering Operations Network (AEON) Environment - First Impressions

    NASA Technical Reports Server (NTRS)

    Scott, David W.

    2010-01-01

    The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for Engineering Support capability for NASA's Ares rocket development and operations. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal to support and simplify two critical activities: (1) accessing and analyzing Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison, and (2) establishing and maintaining collaborative communities within the Ares teams/subteams and with other projects, e.g., Space Shuttle, International Space Station (ISS). AEON seeks to provide a seamless interface to (a) locally developed engineering applications and (b) a Commercial-Off-The-Shelf (COTS) collaborative environment that includes Web 2.0 capabilities, e.g., blogging, wikis, and social networking. This paper discusses how Web 2.0 might be applied to the typically conservative engineering support arena, based on feedback from Integration, Verification, and Validation (IV&V) testing and on searching for their use in similar environments.

  9. Lessons Learned from a Collaborative Sensor Web Prototype

    NASA Technical Reports Server (NTRS)

    Ames, Troy; Case, Lynne; Krahe, Chris; Hess, Melissa; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    This paper describes the Sensor Web Application Prototype (SWAP) system that was developed for the Earth Science Technology Office (ESTO). The SWAP is aimed at providing an initial engineering proof-of-concept prototype highlighting sensor collaboration, dynamic cause-effect relationship between sensors, dynamic reconfiguration, and remote monitoring of sensor webs.

  10. Web application to access U.S. Army Corps of Engineers Civil Works and Restoration Projects information for the Rio Grande Basin, southern Colorado, New Mexico, and Texas

    USGS Publications Warehouse

    Archuleta, Christy-Ann M.; Eames, Deanna R.

    2009-01-01

    The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.

  11. Keeping Dublin Core Simple: Cross-Domain Discovery or Resource Description?; First Steps in an Information Commerce Economy: Digital Rights Management in the Emerging E-Book Environment; Interoperability: Digital Rights Management and the Emerging EBook Environment; Searching the Deep Web: Direct Query Engine Applications at the Department of Energy.

    ERIC Educational Resources Information Center

    Lagoze, Carl; Neylon, Eamonn; Mooney, Stephen; Warnick, Walter L.; Scott, R. L.; Spence, Karen J.; Johnson, Lorrie A.; Allen, Valerie S.; Lederman, Abe

    2001-01-01

    Includes four articles that discuss Dublin Core metadata, digital rights management and electronic books, including interoperability; and directed query engines, a type of search engine designed to access resources on the deep Web that is being used at the Department of Energy. (LRW)

  12. Open Marketplace for Simulation Software on the Basis of a Web Platform

    NASA Astrophysics Data System (ADS)

    Kryukov, A. P.; Demichev, A. P.

    2016-02-01

    The focus in development of a new generation of middleware is shifting from global grid systems to building convenient and efficient web platforms for remote access to individual computing resources. A further line of their development, suggested in this work, is related not only to the quantitative increase in their number and the expansion of the scientific, engineering, and manufacturing areas in which they are used, but also to improved technology for the remote deployment of application software on the resources interacting with the web platforms. Currently, services for providers of application software in the context of science-oriented web platforms are not sufficiently developed. The new web platform for an application-software market proposed in this work should have all the features of existing web platforms for submitting jobs to remote resources, plus specific web services for interaction on market principles between providers and consumers of application packages. The suggested approach will be validated on the example of simulation applications in the field of nonlinear optics.

  13. Spaceport Command and Control System Support Software Development

    NASA Technical Reports Server (NTRS)

    Brunotte, Leonard

    2016-01-01

    The Spaceport Command and Control System (SCCS) is a project developed and used by NASA at Kennedy Space Center to control and monitor the Space Launch System (SLS) at the time of its launch. One integral subteam under SCCS is assigned to the development of a data set building application to be used both on the launch pad and in the Launch Control Center (LCC) at the time of launch. This web application was developed in Ruby on Rails, a web framework using the Ruby object-oriented programming language, by a team of approximately 15 employees. Because this application is such a huge undertaking with many facets and iterations, there were a few areas in which work could be more easily organized and expedited. As an intern working with this team, I was charged with the task of writing web applications that fulfilled this need, creating a virtual and highly customizable whiteboard to allow engineers to keep track of build iterations and their status. Additionally, I developed a knowledge capture web application wherein any engineer or contractor within SCCS could ask a question, answer an existing question, or leave a comment on any question or answer, similar to Stack Overflow.

  14. Modeling Rich Interactions for Web Search Intent Inference, Ranking and Evaluation

    ERIC Educational Resources Information Center

    Guo, Qi

    2012-01-01

    Billions of people interact with Web search engines daily and their interactions provide valuable clues about their interests and preferences. While modeling search behavior, such as queries and clicks on results, has been found to be effective for various Web search applications, the effectiveness of the existing approaches is limited by…

  15. Research on the optimization strategy of web search engine based on data mining

    NASA Astrophysics Data System (ADS)

    Chen, Ronghua

    2018-04-01

    With the wide application of search engines, web site information has become an important way for people to obtain information, but it is growing in an increasingly explosive manner. It is very difficult for people to find the information they need, and current search engines cannot meet this need, so there is an urgent need for the network to provide personalized web site information services; data mining technology offers a way to find a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a website search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. It has important practical value.

  16. Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.

    PubMed

    Brandt, C; Nadkarni, P

    2001-01-01

    The Web is increasingly the medium of choice for multi-user application program delivery. Yet selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience with the conversion of a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that provision of easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages.

  17. Web services in the U.S. geological survey streamstats web application

    USGS Publications Warehouse

    Guthrie, J.D.; Dartiguenave, C.; Ries, Kernell G.

    2009-01-01

    StreamStats is a U.S. Geological Survey Web-based GIS application developed as a tool for water-resources planning and management, engineering design, and other applications. StreamStats' primary functionality allows users to obtain drainage-basin boundaries, basin characteristics, and streamflow statistics for gaged and ungaged sites. Recently, Web services have been developed that give remote users and applications access to comprehensive GIS tools that are available in StreamStats, including delineating drainage-basin boundaries, computing basin characteristics, estimating streamflow statistics for user-selected locations, and determining point features that coincide with a National Hydrography Dataset (NHD) reach address. For the state of Kentucky, a web service also has been developed that provides users the ability to estimate daily time series of drainage-basin average values of daily precipitation and temperature. The use of web services allows the user to take full advantage of the datasets and processes behind the StreamStats application without having to develop and maintain them. © 2009 IEEE.
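    For a sense of what calling such a service looks like, here is a hedged sketch of a watershed-delineation request. The endpoint path and parameter names (rcode, xlocation, ylocation, crs) are assumptions modeled on publicly documented StreamStats services; consult the current USGS documentation for the real interface.

    ```python
    import requests

    params = {
        "rcode": "KY",        # assumed state/region code parameter
        "xlocation": -84.5,   # longitude of the point of interest
        "ylocation": 38.0,    # latitude of the point of interest
        "crs": 4326,          # coordinate reference system
    }
    resp = requests.get(
        "https://streamstats.usgs.gov/streamstatsservices/watershed.geojson",
        params=params,
    )
    print(resp.status_code)  # on success, the body is GeoJSON for the basin
    ```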

  18. Web-Based Distributed Simulation of Aeronautical Propulsion System

    NASA Technical Reports Server (NTRS)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  19. Geo-Engineering through Internet Informatics (GEMINI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doveton, John H.; Watney, W. Lynn

    The program, covering development and methodologies, was a 3-year interdisciplinary effort to develop an interactive, integrated Internet Web site named GEMINI (Geo-Engineering Modeling through Internet Informatics) that would build real-time geo-engineering reservoir models for the Internet using the latest technology in Web applications.

  20. 40 CFR 63.820 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., or wide-web flexographic printing presses are operated, and (2) Each new and existing facility at which publication rotogravure, product and packaging rotogravure, or wide-web flexographic printing... specify, using the best monitoring methods and engineering judgment, the amount of excess emissions that...

  1. Regional Geology Web Map Application Development: Javascript v2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, Glenn

    This is a milestone report for the FY2017 continuation, under the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program), of the development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. This application was developed for general public use and is an interactive web-based application built in Javascript to visualize, reference, and analyze pertinent US geological features of the SFSWT program. This tool is a version upgrade from Adobe FLEX technology. It is designed to facilitate informed decision making on the geology of the continental US relevant to the SFSWT program.

  2. Online Data Resources in Chemical Engineering Education: Impact of the Uncertainty Concept for Thermophysical Properties

    ERIC Educational Resources Information Center

    Kim, Sun Hyung; Kang, Jeong Won; Kroenlein, Kenneth; Magee, Joseph W.; Diky, Vladimir; Muzny, Chris D.; Kazakov, Andrei F.; Chirico, Robert D.; Frenkel, Michael

    2013-01-01

    We review the concept of uncertainty for thermophysical properties and its critical impact for engineering applications in the core courses of chemical engineering education. To facilitate the translation of developments to engineering education, we employ NIST Web Thermo Tables to furnish properties data with their associated expanded…

  3. Design of web platform for science and engineering in the model of open market

    NASA Astrophysics Data System (ADS)

    Demichev, A. P.; Kryukov, A. P.

    2016-09-01

    This paper presents the design and operation algorithms of a web platform for convenient, secure, and effective remote interaction, on the principles of an open market, between users and providers of scientific application software and databases.

  4. Evaluation of the Aurora Application Shade Measurement Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-01

    Aurora is an integrated, Web-based application that helps solar installers perform sales, engineering design, and financial analysis. One of Aurora's key features is its high-resolution remote shading analysis.

  5. U.S. Seismic Design Maps Web Application

    NASA Astrophysics Data System (ADS)

    Martinez, E.; Fee, J.

    2015-12-01

    The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions. It is the primary method for design engineers to obtain ground motion parameters for multiple building codes across the country. When designing new buildings and other structures, engineers around the country use the application. Users specify the design code of interest, location, and other parameters to obtain necessary ground motion information consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted such that they can be directly included in a final engineering report. In addition to single-site analysis, the application supports a batch mode for simultaneous consideration of multiple locations. Finally, an application programming interface (API) is available which allows other application developers to integrate this application's results into larger applications for additional processing. Development on the application has proceeded in an iterative manner working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process and is now used to produce over 1800 reports daily. Recent efforts have enhanced the application to be a data-driven, mobile-first, responsive web application. Development is ongoing, and source code has recently been published into the open-source community on GitHub. Open-sourcing the code facilitates improved incorporation of user feedback to add new features ensuring the application's continued success.
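    A client of the application programming interface mentioned above might look like this sketch. The endpoint and parameters reflect my understanding of the publicly documented USGS design-maps web service and should be treated as assumptions, not quoted from this abstract.

    ```python
    import requests

    params = {
        "latitude": 34.05, "longitude": -118.25,  # example site in Los Angeles
        "riskCategory": "II", "siteClass": "C",   # assumed parameter names
        "title": "Example",
    }
    resp = requests.get(
        "https://earthquake.usgs.gov/ws/designmaps/asce7-16.json", params=params
    )
    print(resp.status_code)  # on success, a JSON report of design ground motions
    ```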

  6. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 18, material safety data sheets, or engineering calculations. Wipe cleaning activities, such as using... continuous web cleaning machine subject to this subpart shall achieve compliance with the provisions of this... products, solvent cleaning machines used in the manufacture of narrow tubing, and continuous web cleaning...

  7. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 18, material safety data sheets, or engineering calculations. Wipe cleaning activities, such as using... continuous web cleaning machine subject to this subpart shall achieve compliance with the provisions of this... products, solvent cleaning machines used in the manufacture of narrow tubing, and continuous web cleaning...

  8. WEbcoli: an interactive and asynchronous web application for in silico design and analysis of genome-scale E.coli model.

    PubMed

    Jung, Tae-Sung; Yeo, Hock Chuan; Reddy, Satty G; Cho, Wan-Sup; Lee, Dong-Yup

    2009-11-01

    WEbcoli is a Web application for in silico designing, analyzing and engineering Escherichia coli metabolism. It is devised and implemented using advanced web technologies, thereby leading to enhanced usability and dynamic web accessibility. As a main feature, the WEbcoli system provides a user-friendly rich web interface, allowing users to virtually design and synthesize mutant strains derived from the genome-scale wild-type E.coli model and to customize pathways of interest through a graph editor. In addition, constraints-based flux analysis can be conducted for quantifying metabolic fluxes and characterizing the physiological and metabolic states under various genetic and/or environmental conditions. WEbcoli is freely accessible at http://webcoli.org. cheld@nus.edu.sg.
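    The constraints-based flux analysis mentioned here reduces to a linear program: maximize a target flux subject to the steady-state constraint S v = 0 and flux bounds. The sketch below solves a toy three-reaction network with SciPy; it is a minimal illustration, not the genome-scale E. coli model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: two internal metabolites, three reactions in a chain.
    S = np.array([[1, -1,  0],    # metabolite A: produced by v1, consumed by v2
                  [0,  1, -1]])   # metabolite B: produced by v2, consumed by v3
    bounds = [(0, 10)] * 3        # lower/upper bounds on each flux
    c = [0, 0, -1]                # linprog minimizes, so maximize v3 via -v3

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print(res.x)                  # optimal flux distribution, here [10, 10, 10]
    ```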

  9. 78 FR 42720 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-17

    ... airplane reaching its limit of validity (LOV) of the engineering data that support the established structural maintenance program. For certain airplanes, this proposed AD would require modification of the web... would require an inspection for cracks in the web, and repair or modification as applicable. We are...

  10. 78 FR 65185 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... airplane reaching its limit of validity (LOV) of the engineering data that support the established structural maintenance program. This AD requires, for certain airplanes, a modification of the web of the... cracks in the web, and repair or modification as applicable. We are issuing this AD to prevent cracking...

  11. Analysis and Development of a Web-Enabled Planning and Scheduling Database Application

    DTIC Science & Technology

    2013-09-01

    establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of…

  12. Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches

    ERIC Educational Resources Information Center

    Adam, Anna; Mowers, Helen

    2008-01-01

    There are a lot of useful tools on the Web, such as social applications and the like. Still, most people go online for one thing: to perform a basic search. For most fact-finding missions, the Web is there. But, as media specialists well know, the sheer wealth of online information can hamper efforts to focus on a few reliable references.…

  13. A Template Engine for Parsing Objects from Textual Representations

    NASA Astrophysics Data System (ADS)

    Rajković, Milan; Stanković, Milena; Marković, Ivica

    2011-09-01

    Template engines are widely used for separation of business and presentation logic. They are commonly used in web applications for clean rendering of HTML pages. Another area of usage is message formatting in distributed applications, where they transform objects into appropriate representations. This paper explores the possibility of using templates for the reverse process: creating objects starting from their representations. We present the prototype engine that we have developed, and describe the benefits and drawbacks of this approach.
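    The reverse-templating idea is easy to prototype: compile a template with named placeholders into a regular expression and parse objects out of matching text. The sketch below uses an invented {name} placeholder syntax, not the paper's engine.

    ```python
    import re

    def compile_template(template: str) -> re.Pattern:
        """Turn 'Order {order_id} shipped to {city}' into a parsing regex."""
        escaped = re.escape(template).replace(r"\{", "{").replace(r"\}", "}")
        pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>.+?)", escaped)
        return re.compile("^" + pattern + "$")

    tpl = compile_template("Order {order_id} shipped to {city}")
    match = tpl.match("Order 42 shipped to Oslo")
    print(match.groupdict())  # {'order_id': '42', 'city': 'Oslo'}
    ```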

  14. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    NASA Astrophysics Data System (ADS)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services, as a new distributed system technology, has been widely adopted by industries in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  15. 75 FR 54163 - Notice of Availability: Notice of Funding Availability (NOFA) for the Fiscal Year (FY) 2009...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... the availability on its Web site of the applicant information, submission deadlines, funding criteria... funding for architectural and engineering work, site control, and other planning related expenses that are... using the Department of Housing and Urban Development agency link on the Grants.gov/Find Web site at...

  16. The Common Gateway Interface (CGI) for Enhancing Access to Database Servers via the World Wide Web (WWW).

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1995-01-01

    Explains the Common Gateway Interface (CGI) protocol as a set of rules for passing information from a Web server to an external program such as a database search engine. Topics include advantages over traditional client/server solutions, limitations, sample library applications, and sources of information from the Internet. (LRW)
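    The protocol itself is simple enough to show: the server passes the query string to an external program through the environment, and the program writes an HTTP response on standard output. A minimal Python CGI program, with an illustrative q field, follows.

    ```python
    #!/usr/bin/env python3
    import os
    from urllib.parse import parse_qs

    # The web server hands the query string to the CGI program via the environment.
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    term = query.get("q", ["(none)"])[0]

    # A CGI response is headers, a blank line, then the body.
    print("Content-Type: text/html")
    print()
    print(f"<html><body><p>You searched for: {term}</p></body></html>")
    ```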

  17. Integrating Engineering Data Systems for NASA Spaceflight Projects

    NASA Technical Reports Server (NTRS)

    Carvalho, Robert E.; Tollinger, Irene; Bell, David G.; Berrios, Daniel C.

    2012-01-01

    NASA has a large range of custom-built and commercial data systems to support spaceflight programs. Some of the systems are re-used by many programs and projects over time. Management and systems engineering processes require integration of data across many of these systems, a difficult problem given the widely diverse nature of system interfaces and data models. This paper describes an ongoing project to use a central data model with a web services architecture to support the integration and access of linked data across engineering functions for multiple NASA programs. The work involves the implementation of a web service-based middleware system called Data Aggregator to bring together data from a variety of systems to support space exploration. Data Aggregator includes a central data model registry for storing and managing links between the data in disparate systems. Initially developed for NASA's Constellation Program needs, Data Aggregator is currently being repurposed to support the International Space Station Program and new NASA projects with processes that involve significant aggregating and linking of data. This change in user needs led to development of a more streamlined data model registry for Data Aggregator in order to simplify adding new project application data as well as standardization of the Data Aggregator query syntax to facilitate cross-application querying by client applications. This paper documents the approach from a set of stand-alone engineering systems from which data are manually retrieved and integrated, to a web of engineering data systems from which the latest data are automatically retrieved and more quickly and accurately integrated. This paper includes the lessons learned through these efforts, including the design and development of a service-oriented architecture and the evolution of the data model registry approaches as the effort continues to evolve and adapt to support multiple NASA programs and priorities.

  18. 'Televaluation' of clinical information systems: an integrative approach to assessing Web-based systems.

    PubMed

    Kushniruk, A W; Patel, C; Patel, V L; Cimino, J J

    2001-04-01

    The World Wide Web provides an unprecedented opportunity for widespread access to health-care applications by both patients and providers. The development of new methods for assessing the effectiveness and usability of these systems is becoming a critical issue. This paper describes the distance evaluation (i.e. 'televaluation') of emerging Web-based information technologies. In health informatics evaluation, there is a need for application of new ideas and methods from the fields of cognitive science and usability engineering. A framework is presented for conducting evaluations of health-care information technologies that integrates a number of methods, ranging from deployment of on-line questionnaires (and Web-based forms) to remote video-based usability testing of user interactions with clinical information systems. Examples illustrating application of these techniques are presented for the assessment of a patient clinical information system (PatCIS), as well as an evaluation of use of Web-based clinical guidelines. Issues in designing, prototyping and iteratively refining evaluation components are discussed, along with description of a 'virtual' usability laboratory.

  19. Developing Distributed Collaboration Systems at NASA: A Report from the Field

    NASA Technical Reports Server (NTRS)

    Becerra-Fernandez, Irma; Stewart, Helen; Knight, Chris; Norvig, Peter (Technical Monitor)

    2001-01-01

    Web-based collaborative systems have assumed a pivotal role in the information systems development arena. While business-to-customer (B-to-C) and business-to-business (B-to-B) electronic commerce systems, search engines, and chat sites are the focus of attention, web-based systems span the gamut of information systems that were traditionally confined to internal organizational client-server networks. For example, the Domino Application Server allows Lotus Notes (trademarked) users to build collaborative intranet applications, and mySAP.com (trademarked) enables web portals and e-commerce applications for SAP users. This paper presents the experiences in the development of one such system: Postdoc, a government off-the-shelf web-based collaborative environment. Issues related to the design of web-based collaborative information systems, including lessons learned from the development and deployment of the system as well as measured performance, are presented in this paper. Finally, the limitations of the implementation approach as well as future plans are presented.

  20. Capturing, Sharing, and Discovering Product Data at a Semantic Level--Moving Forward to the Semantic Web for Advancing the Engineering Product Design Process

    ERIC Educational Resources Information Center

    Zhu, Lijuan

    2011-01-01

    Along with the greater productivity that CAD automation provides nowadays, the product data of engineering applications needs to be shared and managed efficiently to gain a competitive edge for the engineering product design. However, exchanging and sharing the heterogeneous product data is still challenging. This dissertation first presents a…

  1. Design Options for Multimodal Web Applications

    NASA Astrophysics Data System (ADS)

    Stanciulescu, Adrian; Vanderdonckt, Jean

    The capabilities of multimodal applications running on the web are well delineated, since they are mainly constrained by what their underlying standard markup language offers, as opposed to hand-made multimodal applications. As experience in developing such multimodal web applications grows, the need arises to identify and define the major design options of such applications to pave the way to a structured development life cycle. This paper provides a design space of independent design options for multimodal web applications based on three types of modalities (graphical, vocal, and tactile) and their combinations. On the one hand, these design options may provide designers with some explicit guidance on what to decide or not for their future user interface, while exploring various design alternatives. On the other hand, these design options have been implemented as graph transformations performed on a user interface model represented as a graph. Thanks to a transformation engine, this allows designers to play with the different values of each design option, to preview the results of the transformation, and to obtain the corresponding code on demand.

  2. Developing a search engine for pharmacotherapeutic information that is not published in biomedical journals.

    PubMed

    Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P

    2011-01-01

    To identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines and to develop a search engine to access this information. Compiling web sites that publish information on the rational use of medicines and have no commercial interests. Free-access web sites in Spanish, Galician, Catalan or English. Designing a search engine using the Google "custom search" application. Overall 159 internet addresses were compiled and were classified into 9 labels. We were able to recover the information from the selected sources using a search engine, which is called "AlquimiA" and available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.
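    Such a custom engine can also be queried programmatically. The sketch below uses Google's Custom Search JSON API, the documented programmatic counterpart of the embedded search box; the key and cx values are placeholders you must supply, and this is an illustration rather than the authors' AlquimiA setup.

    ```python
    import requests

    params = {
        "key": "YOUR_API_KEY",    # placeholder API key
        "cx": "YOUR_ENGINE_ID",   # placeholder custom-search-engine id
        "q": "amiodarone drug evaluation",
    }
    resp = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
    for item in resp.json().get("items", []):
        print(item["title"], item["link"])
    ```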

  3. Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform

    NASA Astrophysics Data System (ADS)

    Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.

    2008-12-01

    Emerging modeling frameworks offer modelers new ways to develop model applications by offering a wide range of software components to handle common modeling tasks such as managing space and time, distributing computational tasks in a parallel processing environment, performing input/output, and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform that allows scientists to focus on the implementation of scientific concepts, in the form of a new modeling markup language and through a minimalist application programming interface that provides means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plugins, establishes the model I/O, and executes the model defined by the modeling XML and the accompanying plugins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations of the run-time engine optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plugins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but has the potential to integrate into semantic web infrastructure, where intelligent parsers can extract information about the model configurations such as input/output requirements, applicable space and time scales, and underlying modeling processes. The NextFrAMES run-time engine itself is also designed to tap into web-enabled data services directly, so it can be incorporated into complex workflows to implement end-to-end applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OpenDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an end-to-end application to deliver highly processed information to end users. The end-to-end application will be demonstrated through examples from the State of the Global Water System effort that builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Observing System of Systems; therefore, the State of the Global Water System can be viewed as a test case for the interoperability of the incorporated web service components.
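    The run-time-engine idea, interpret a model markup, load plugins, and execute them in order, can be caricatured in a few lines. The markup and plugin names below are invented; the real NextFrAMES modeling markup language is not reproduced here.

    ```python
    import xml.etree.ElementTree as ET

    MODEL_XML = """<model name="toy-hydrology">
      <process plugin="runoff"/>
      <process plugin="routing"/>
    </model>"""

    # Hypothetical plugin registry: each plugin transforms the model state.
    PLUGINS = {
        "runoff": lambda state: {**state, "runoff_mm": 0.4 * state["rain_mm"]},
        "routing": lambda state: {**state, "discharge": 2.0 * state["runoff_mm"]},
    }

    state = {"rain_mm": 10.0}
    for proc in ET.fromstring(MODEL_XML).findall("process"):
        state = PLUGINS[proc.get("plugin")](state)  # dispatch declared processes
    print(state)
    ```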

  4. Encouraging the learning of hydraulic engineering subjects in agricultural engineering schools

    NASA Astrophysics Data System (ADS)

    Rodríguez Sinobas, Leonor; Sánchez Calvo, Raúl

    2014-09-01

    Several methodological approaches to improve the understanding and motivation of students in Hydraulic Engineering courses have been adopted in the Agricultural Engineering School at the Technical University of Madrid. Over three years, students' progress and satisfaction have been assessed by continuous monitoring and the use of 'online' and web tools in two undergraduate courses. Results from their application to encourage learning and communication skills in Hydraulic Engineering subjects are analysed and compared to the initial situation. Students' academic performance has improved since their introduction, but surveys among students showed that not all the methodological proposals were perceived as beneficial. Student participation in the 'online', classroom and reading activities was low, although these activities were well assessed.

  5. 40 CFR 1065.1 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... regulated under vehicle-based, equipment-based, or vessel-based standards, use good engineering judgment to...) For additional information regarding these test procedures, visit our Web site at http://www.epa.gov...

  6. 40 CFR 1065.1 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... regulated under vehicle-based, equipment-based, or vessel-based standards, use good engineering judgment to...) For additional information regarding these test procedures, visit our Web site at http://www.epa.gov...

  7. Design and Implementation of a Prototype Ontology Aided Knowledge Discovery Assistant (OAKDA) Application

    DTIC Science & Technology

    2006-12-01

    speed of search engines improves the efficiency of such methods, effectiveness is not improved. The objective of this thesis is to construct and test...interest, users are assisted in finding a relevant set of key terms that will aid the search engines in narrowing, widening, or refocusing a Web search

  8. Agile Development of Various Computational Power Adaptive Web-Based Mobile-Learning Software Using Mobile Cloud Computing

    ERIC Educational Resources Information Center

    Zadahmad, Manouchehr; Yousefzadehfard, Parisa

    2016-01-01

    Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method to use web technology and software engineering's best practices to provide m-learning functionalities hosted in a MCC-learning system as service. Components hosted by MCC are used to empower developers to create…

  9. Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick

    2017-04-01

    Google Earth Engine (GEE) is a parallel geospatial processing platform that harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing the development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly unutilized in this context. In recent years, WebGL has become a popular and well-supported API that allows fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL for fast segmentation of satellite images. A new implementation of the Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions; it adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to the significant amount of data that must be transferred to the client, it should significantly improve exploratory possibilities and simplify the development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
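    For reference, what SLIC computes can be reproduced on the CPU in a few lines with scikit-image (the implementation presented in this work runs in WebGL shaders instead; the image and parameter choices below are illustrative):

      from skimage.data import astronaut
      from skimage.segmentation import slic, mark_boundaries

      image = astronaut()                      # any RGB image, e.g. a satellite tile
      segments = slic(image,
                      n_segments=500,          # target number of superpixels
                      compactness=10.0,        # color- vs. space-proximity trade-off
                      start_label=1)
      print(segments.max(), "superpixels")
      overlay = mark_boundaries(image, segments)   # visually homogeneous regions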

  10. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  11. Using ruby on rails to develop a web interface: a research-based exemplar with a computerized physical activity reporter.

    PubMed

    Blaz, Jacquelyn W; Pearce, Patricia F

    2009-01-01

    The world is becoming increasingly web-based. Health care institutions are utilizing the web for personal health records, surveillance, communication, and education; health care researchers are finding value in using the web for research subject recruitment, data collection, and follow-up. Programming languages such as Java require knowledge and experience usually found only in software engineers and consultants. The purpose of this paper is to demonstrate Ruby on Rails as a feasible alternative for programming questionnaires for use on the web. Ruby on Rails was specifically designed for the development, deployment, and maintenance of database-backed web applications. It is flexible, customizable, and easy to learn. With relatively little initial training, a novice programmer can create a robust web application in a small amount of time, without the need for a software consultant. The translation of the Children's Computerized Physical Activity Reporter (C-CPAR) from a local installation in Microsoft Access to a web-based format utilizing Ruby on Rails is given as an example.

  12. A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung D.; Steele, Gynelle C.

    2016-01-01

    This report demonstrates a full-text-based search engine that works with any Web-based mobile application. The engine can search databases across multiple categories based on a user's queries and identify the most relevant or similar documents. The search results presented here were obtained using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.

  13. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.

  14. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. ?? 2005 Elsevier Ltd. All rights reserved.

  15. Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective

    ERIC Educational Resources Information Center

    Hadjerrouit, Said

    2005-01-01

    In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…

  16. GLobal Integrated Design Environment

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.

    2011-01-01

    The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design-session issue of passing data in real time between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed, in this case Excel, to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. This slow process of data exchange could stretch a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time making much more information about a design session available. GLIDE is written in a combination of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available for download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the client-server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE client, as many of their custom tools run in Excel.

  17. Workflow and web application for annotating NCBI BioProject transcriptome data

    PubMed Central

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A.; Barrero, Luz S.; Landsman, David

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. Database URL: http://www.ncbi.nlm.nih.gov/projects/physalis/ PMID:28605765

  18. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    ERIC Educational Resources Information Center

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies that account for the immense growth and…
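    For context, the eigenvector problems in question take the following standard forms (standard notation from the literature, not the thesis's own): with S the column-stochastic hyperlink matrix of an n-page web graph and alpha a damping factor (typically 0.85), the PageRank vector is the principal eigenvector of the Google matrix; HITS computes principal eigenvectors of the co-citation matrices built from the adjacency matrix A.

      % PageRank: principal eigenvector of the Google matrix
      G = \alpha S + (1-\alpha)\,\tfrac{1}{n}\,\mathbf{e}\mathbf{e}^{\top},
      \qquad G\,\boldsymbol{\pi} = \boldsymbol{\pi}

      % HITS: authority scores a and hub scores h
      (A^{\top}A)\,\mathbf{a} = \lambda_{\max}\,\mathbf{a},
      \qquad (AA^{\top})\,\mathbf{h} = \lambda_{\max}\,\mathbf{h}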

  19. Study on online community user motif using web usage mining

    NASA Astrophysics Data System (ADS)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is an application of data mining used to extract useful information from online communities. The World Wide Web contains at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, as of Thursday, 6 August 2015. It is difficult to find the data one needs among these billions of web pages; hence the importance of web usage mining. Personalizing the search engine helps web users identify the most used data in an easy way: it reduces time consumption, enables automatic site search, and automatically restores useful sites. This study reviews the techniques used in pattern discovery and analysis in web usage mining, from the oldest (1996) to the latest (2015). Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and websites.

  20. Location-based Web Search

    NASA Astrophysics Data System (ADS)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
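    As a toy illustration of the location-extraction step (the chapter's heuristics are considerably richer; the gazetteer entries below are invented for illustration):

      # Match page text against a small gazetteer and attach a geo-context.
      GAZETTEER = {
          "oldenburg": (53.14, 8.21),
          "bremen":    (53.08, 8.80),
      }

      def geo_context(page_text):
          """Return (place, lat, lon) candidates found in a page."""
          text = page_text.lower()
          return [(name, lat, lon)
                  for name, (lat, lon) in GAZETTEER.items()
                  if name in text]

      print(geo_context("Opening hours of our Oldenburg store ..."))
      # [('oldenburg', 53.14, 8.21)] -> geo-context indexed for spatial search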

  1. 47 CFR 5.61 - Procedure for obtaining a special temporary authorization.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... must be submitted electronically through the Office of Engineering and Technology Web site http://www... application for STA shall contain the following information: (1) Name, address, phone number (also email...

  2. 47 CFR 5.61 - Procedure for obtaining a special temporary authorization.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... must be submitted electronically through the Office of Engineering and Technology Web site http://www... application for STA shall contain the following information: (1) Name, address, phone number (also email...

  3. A Generic Evaluation Model for Semantic Web Services

    NASA Astrophysics Data System (ADS)

    Shafiq, Omair

    Semantic Web Services research has gained momentum over the last few years, and by now several realizations exist. They are being used in a number of industrial use-cases. Soon software developers will be expected to use this infrastructure to build their B2B applications requiring dynamic integration. However, there is still a lack of guidelines for the evaluation of tools developed to realize Semantic Web Services and of applications built on top of them. In normal software engineering practice such guidelines can already be found for traditional component-based systems. Some efforts are also being made to build performance models for service-based systems. Drawing on these related efforts in component-oriented and service-based systems, we identified the need for a generic evaluation model for Semantic Web Services applicable to any realization. The generic evaluation model will help users and customers to orient their systems and solutions towards using Semantic Web Services. In this chapter, we present the requirements for the generic evaluation model for Semantic Web Services and discuss the initial steps that we took to sketch such a model. Finally, we discuss related activities for evaluating semantic technologies.

  4. Semantic similarity measure in biomedical domain leverage web search engine.

    PubMed

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in the biomedical domain. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measurements in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q and P AND Q. Moreover, we propose a novel approach to computing semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
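    Representative examples of such page-count scores from this line of work (assuming the familiar definitions; H(x) denotes the page count returned for query x and N the number of pages indexed by the engine):

      \mathrm{WebJaccard}(P,Q) = \frac{H(P \wedge Q)}{H(P) + H(Q) - H(P \wedge Q)},
      \qquad
      \mathrm{WebPMI}(P,Q) = \log_2 \frac{H(P \wedge Q)/N}{\bigl(H(P)/N\bigr)\bigl(H(Q)/N\bigr)}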

  5. Application of Microsoft's ActiveX and DirectX technologies to the visualization of physical system dynamics

    NASA Astrophysics Data System (ADS)

    Mann, Christopher; Narasimhamurthi, Natarajan

    1998-08-01

    This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.

  6. GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.

    2010-01-01

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and with each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process in a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
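    A minimal sketch of such parameter sharing over HTTP follows (endpoint paths, field names and the session identifier are hypothetical; the abstract does not publish GLIDE's actual API):

      import requests

      BASE = "https://glide.example/api"        # hypothetical server
      SESSION = {"session_id": "demo-study-42"}

      # One discipline lead publishes an updated parameter ...
      requests.post(BASE + "/parameters",
                    json={**SESSION, "name": "thrust_kN",
                          "value": 312.4, "author": "propulsion"})

      # ... and another polls for the latest value during the same session.
      resp = requests.get(BASE + "/parameters",
                          params={**SESSION, "name": "thrust_kN"})
      print(resp.json())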

  7. 47 CFR 101.103 - Frequency coordination procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... requirements of § 101.21(f). In engineering a system or modification thereto, the applicant must, by....2-12.7 GHz frequency band and maintain an Internet web site of all existing transmitting sites and...

  8. 47 CFR 101.103 - Frequency coordination procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... requirements of § 101.21(f). In engineering a system or modification thereto, the applicant must, by....2-12.7 GHz frequency band and maintain an Internet web site of all existing transmitting sites and...

  9. Tags Extraction from Spatial Documents in Search Engines

    NASA Astrophysics Data System (ADS)

    Borhaninejad, S.; Hakimpour, F.; Hamzei, E.

    2015-12-01

    Nowadays selective access to information on the Web is provided by search engines, but in cases where the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between system and user. We have implemented this system as a pilot on an Application Server as a simulation of the Web. Our system, as a spatial search engine, provides search capability throughout GML documents; an important step towards improving the efficiency of search engines has thus been taken.
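    The parsing step can be illustrated with the Python standard library (the GML fragment below is an invented sample):

      import xml.etree.ElementTree as ET

      GML = """<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
        <gml:featureMember>
          <gml:name>Azadi Tower</gml:name>
          <gml:Point><gml:pos>35.6997 51.3380</gml:pos></gml:Point>
        </gml:featureMember>
      </gml:FeatureCollection>"""

      NS = {"gml": "http://www.opengis.net/gml"}
      root = ET.fromstring(GML)
      for member in root.findall("gml:featureMember", NS):
          name = member.findtext("gml:name", namespaces=NS)
          pos = member.findtext(".//gml:pos", namespaces=NS)
          print(name, "->", pos)    # tags and coordinates to be indexed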

  10. EntrezAJAX: direct web browser access to the Entrez Programming Utilities.

    PubMed

    Loman, Nicholas J; Pallen, Mark J

    2010-06-21

    Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides the same functionality as Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/

  11. Workflow and web application for annotating NCBI BioProject transcriptome data.

    PubMed

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  12. WebCIS: large scale deployment of a Web-based clinical information system.

    PubMed

    Hripcsak, G; Cimino, J J; Sengupta, S

    1999-01-01

    WebCIS is a Web-based clinical information system. It sits atop the existing Columbia University clinical information system architecture, which includes a clinical repository, the Medical Entities Dictionary, an HL7 interface engine, and an Arden Syntax-based clinical event monitor. WebCIS security features include authentication with secure tokens, authorization maintained in an LDAP server, SSL encryption, permanent audit logs, and application timeouts. WebCIS is currently used by 810 physicians at the Columbia-Presbyterian center of New York Presbyterian Healthcare to review and enter data into the electronic medical record. Current deployment challenges include maintaining adequate database performance despite complex queries, replacing large numbers of computers that cannot run modern Web browsers, and training users who have never logged onto the Web. Although the raised expectations and higher goals have increased deployment costs, the end result is a far more functional, far more available system.

  13. Searching the world wide Web

    PubMed

    Lawrence; Giles

    1998-04-03

    The coverage and recency of the major World Wide Web search engines were analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: no single engine indexes more than about one-third of the "indexable Web," the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages.
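    The lower bound follows a capture-recapture argument: if engines a and b index n_a and n_b pages with n_ab pages in common, and the two indexes are treated as approximately independent samples of the Web of size N, then

      \frac{n_{ab}}{n_b} \approx \frac{n_a}{N}
      \quad\Longrightarrow\quad
      N \approx \frac{n_a\,n_b}{n_{ab}},

    which is the reasoning behind the reported 320 million page estimate.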

  14. CCBuilder 2.0: Powerful and accessible coiled-coil modeling.

    PubMed

    Wood, Christopher W; Woolfson, Derek N

    2018-01-01

    The increased availability of user-friendly and accessible computational tools for biomolecular modeling would expand the reach and application of biomolecular engineering and design. For protein modeling, one key challenge is to reduce the complexities of 3D protein folds to sets of parametric equations that nonetheless capture the salient features of these structures accurately. At present, this is possible for a subset of proteins, namely, repeat proteins. The α-helical coiled coil provides one such example, which represents ≈ 3-5% of all known protein-encoding regions of DNA. Coiled coils are bundles of α helices that can be described by a small set of structural parameters. Here we describe how this parametric description can be implemented in an easy-to-use web application, called CCBuilder 2.0, for modeling and optimizing both α-helical coiled coils and polyproline-based collagen triple helices. This has many applications, from providing models to aid molecular replacement in X-ray crystallography, through in silico model building and engineering of natural and designed protein assemblies, to the creation of completely de novo "dark matter" protein structures. CCBuilder 2.0 is available as a web-based application, the code for which is open-source and can be downloaded freely. http://coiledcoils.chm.bris.ac.uk/ccbuilder2. We have created CCBuilder 2.0, an easy-to-use web-based application that can model structures for a whole class of proteins, the α-helical coiled coil, which is estimated to account for 3-5% of all proteins in nature. CCBuilder 2.0 will be of use to a large number of protein scientists engaged in fundamental studies, such as protein structure determination, through to more-applied research including designing and engineering novel proteins that have potential applications in biotechnology. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
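    One common form of such a parametric description, quoted here from the coiled-coil literature rather than from the paper itself, is a Crick-style parameterization: a minor helix of radius r_1 wound around a superhelix of radius r_0, with super- and minor-helical frequencies ω_0 and ω_1, pitch P, pitch angle α and phase φ (CCBuilder exposes parameters of this general kind):

      \begin{aligned}
      x(t) &= r_0\cos(\omega_0 t) + r_1\cos(\omega_0 t)\cos(\omega_1 t + \phi)
              - r_1\cos\alpha\,\sin(\omega_0 t)\sin(\omega_1 t + \phi)\\
      y(t) &= r_0\sin(\omega_0 t) + r_1\sin(\omega_0 t)\cos(\omega_1 t + \phi)
              + r_1\cos\alpha\,\cos(\omega_0 t)\sin(\omega_1 t + \phi)\\
      z(t) &= \frac{P\,\omega_0 t}{2\pi} - r_1\sin\alpha\,\sin(\omega_1 t + \phi)
      \end{aligned}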

  15. Model My Watershed and BiG CZ Data Portal: Interactive geospatial analysis and hydrological modeling web applications that leverage the Amazon cloud for scientists, resource managers and students

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.

    2016-12-01

    The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application that gives scientists intuitive, high-performance map-based discovery, visualization, access and publication of diverse earth and environmental science data via a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a selected area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), a high-performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis) and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can use "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling scientists, resource managers and students alike to share data, analyses and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen-scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that they can be reshared using WaterOneFlow web services.

  16. CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 9, September 2007

    DTIC Science & Technology

    2007-09-01

    underlying application framework, e.g., Java Enterprise Edition or .NET. This increases the risk that consumer Web services not based on the same...weaknesses and vulnerabilities that are targeted by attackers and malicious code. For example, Apache Axis 2 enables a Java developer to simply...load his/her Java objects into the Axis SOAP engine. At runtime, it is the SOAP engine that determines which incoming SOAP request messages should be

  17. phpMs: A PHP-Based Mass Spectrometry Utilities Library.

    PubMed

    Collins, Andrew; Jones, Andrew R

    2018-03-02

    The recent establishment of cloud computing, high-throughput networking, and more versatile web standards and browsers has led to a renewed interest in web-based applications. While big data has traditionally been the domain of optimized desktop and server applications, it is now possible to store vast amounts of data and perform the necessary calculations offsite with cloud storage and computing providers, with the results visualized in a high-quality cross-platform interface via a web browser. There are a number of emerging platforms for cloud-based mass spectrometry data analysis; however, there is limited pre-existing code accessible to web developers, especially for those constrained to a shared hosting environment where Java and C applications are often forbidden by the hosting provider. To remedy this, we provide an open-source mass spectrometry library for one of the most commonly used web development languages, PHP. Our new library, phpMs, provides objects for storing and manipulating spectra and identification data, utilities for file reading, file writing, calculations, peptide fragmentation, and protein digestion, and a software interface for controlling search engines. We provide a working demonstration of some of the capabilities at http://pgb.liv.ac.uk/phpMs.

  18. Ontology construction and application in practice case study of health tourism in Thailand.

    PubMed

    Chantrapornchai, Chantana; Choksuchat, Chidchanok

    2016-01-01

    Ontology is one of the key components of semantic webs. It contains the core knowledge for an effective search. However, building an ontology requires carefully collected knowledge that is very domain-sensitive. In this work, we present the practice of ontology construction for a case study of health tourism in Thailand. The whole process follows the METHONTOLOGY approach, which consists of the phases: information gathering, corpus study, ontology engineering, evaluation, publishing, and application construction. Different sources of data, such as structured web documents like HTML and other documents, are acquired in the information gathering process. Tourism corpora from various tourism texts and standards are explored. The ontology is evaluated in two ways: by automatic reasoning using Pellet and RacerPro, and by questionnaires completed by experts in the relevant domains (tourism domain experts and ontology experts). The ontology's usability is demonstrated via a semantic web application and via example axioms. The developed ontology is the first health tourism ontology in Thailand with a published application.

  19. A study of medical and health queries to web search engines.

    PubMed

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

    This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were: general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. The findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.

  20. Web Search Studies: Multidisciplinary Perspectives on Web Search Engines

    NASA Astrophysics Data System (ADS)

    Zimmer, Michael

    Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.

  1. Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.

    PubMed

    Troshin, Peter V; Procter, James B; Barton, Geoffrey J

    2011-07-15

    JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and has been the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients with full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.

  2. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    NASA Astrophysics Data System (ADS)

    Metaxas, Panagiotis Takis

    Search engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid-1990s, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared, and they have been evolving ever since, taking over the role that web directories used to play.

  3. The emerging Web 2.0 social software: an enabling suite of sociable technologies in health and health care education.

    PubMed

    Kamel Boulos, Maged N; Wheeler, Steve

    2007-03-01

    Web 2.0 sociable technologies and social software are presented as enablers in health and health care, for organizations, clinicians, patients and laypersons. They include social networking services, collaborative filtering, social bookmarking, folksonomies, social search engines, file sharing and tagging, mashups, instant messaging, and online multi-player games. The more popular Web 2.0 applications in education, namely wikis, blogs and podcasts, are but the tip of the social software iceberg. Web 2.0 technologies represent a quite revolutionary way of managing and repurposing/remixing online information and knowledge repositories, including clinical and research information, in comparison with the traditional Web 1.0 model. The paper also offers a glimpse of future software, touching on Web 3.0 (the Semantic Web) and how it could be combined with Web 2.0 to produce the ultimate architecture of participation. Although the tools presented in this review look very promising and potentially fit for purpose in many health care applications and scenarios, careful thinking, testing and evaluation research are still needed in order to establish 'best practice models' for leveraging these emerging technologies to boost our teaching and learning productivity, foster stronger 'communities of practice', and support continuing medical education/professional development (CME/CPD) and patient education.

  4. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  5. Comparison of Physics Frameworks for WebGL-Based Game Engine

    NASA Astrophysics Data System (ADS)

    Yogya, Resa; Kosala, Raymond

    2014-03-01

    Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, many possibilities in the game development area remain unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks, Bullet, Cannon, and JigLib, into a WebGL-based game engine. Through experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and that Bullet is the best physics framework to integrate into it.

  6. A browser-based event display for the CMS experiment at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hategan, M.; McCauley, T.; Nguyen, P.

    2012-01-01

    The line between native and web applications is becoming increasingly blurred as modern web browsers become powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications offer a way to deploy tools in a highly restrictive computing environment. The I2U2 collaboration has developed a browser-based event display for viewing events in data collected and released to the public by the CMS experiment at the LHC. The application reads a JSON event format and uses the JavaScript 3D rendering engine pre3d. The only requirement is a modern browser supporting the HTML5 canvas. The event display has been used by thousands of high school students in the context of programs organized by I2U2, QuarkNet, and IPPOG. This browser-based approach to event display can have broader usage and impact for experts and public alike.

  7. Using the USGS Seismic Risk Web Application to estimate aftershock damage

    USGS Publications Warehouse

    McGowan, Sean M.; Luco, Nicolas

    2014-01-01

    The U.S. Geological Survey (USGS) Engineering Risk Assessment Project has developed the Seismic Risk Web Application to combine earthquake hazard and structural fragility information in order to calculate the risk of earthquake damage to structures. Enabling users to incorporate their own hazard and fragility information into the calculations will make it possible to quantify (in near real-time) the risk of additional damage to structures caused by aftershocks following significant earthquakes. Results can quickly be shared with stakeholders to illustrate the impact of elevated ground motion hazard and earthquake-compromised structural integrity on the risk of damage during a short-term, post-earthquake time horizon.
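    The underlying computation is the usual hazard-fragility convolution (generic notation from earthquake engineering, not taken from the USGS tool): the annual frequency of damage λ_D combines the ground-motion hazard curve λ_IM with a structural fragility P(D | IM = im),

      \lambda_D = \int_0^{\infty} P\bigl(D \mid IM = im\bigr)\,
      \left|\frac{d\lambda_{IM}(im)}{d\,im}\right|\,d\,im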

  8. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics.

    PubMed

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-12-10

    Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast-growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0) (p < 0.001). The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians.
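    For reference, the reported figures use the standard retrieval definitions: with R the set of relevant topics and S the set returned by an engine,

      \mathrm{precision} = \frac{|R \cap S|}{|S|},
      \qquad
      \mathrm{recall} = \frac{|R \cap S|}{|R|}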

  9. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    PubMed Central

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-01-01

    Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast-growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0) (p < 0.001). Conclusion The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians. PMID:15588311

  10. Information Clustering Based on Fuzzy Multisets.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki

    2003-01-01

    Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…

  11. Deep Web video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None Available

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  12. Creating a Classroom Kaleidoscope with the World Wide Web.

    ERIC Educational Resources Information Center

    Quinlan, Laurie A.

    1997-01-01

    Discusses the elements of classroom Web presentations: planning; construction, including design tips; classroom use; and assessment. Lists 14 World Wide Web resources for K-12 teachers; Internet search tools (directories, search engines and meta-search engines); a Web glossary; and an example of HTML for a simple Web page. (PEN)

  13. EntrezAJAX: direct web browser access to the Entrez Programming Utilities

    PubMed Central

    2010-01-01

    Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides the same functionality as Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/ PMID:20565938

  14. The Evolution of the DARWIN System

    NASA Technical Reports Server (NTRS)

    Walton, Joan D.; Filman, Robert E.; Korsmeyer, David J.; Norvig, Peter (Technical Monitor)

    1999-01-01

    DARWIN is a web-based system for presenting the results of wind-tunnel testing and computational model analyses to aerospace designers. DARWIN captures the data, maintains the information, and manages derived knowledge (e.g., visualizations) of large quantities of aerospace data. In addition, it provides tools and an environment for distributed collaborative engineering. We are currently constructing the third version of the DARWIN software system. DARWIN's development history has, in some sense, tracked the development of web applications. The 1995 DARWIN reflected the latest web technologies available at that time: CGI scripts, Java applets, and a three-layer architecture. The 1997 version of DARWIN expanded on this base, making extensive use of a plethora of web technologies, including Java/JavaScript and Dynamic HTML. While more powerful, this multiplicity has proven to be a maintenance and development headache. The year 2000 version of DARWIN will provide a more stable and uniform foundation environment, composed primarily of Java mechanisms. In this paper, we discuss this evolution, comparing the strengths and weaknesses of the various architectural approaches and describing the lessons learned about building complex web applications.

  15. The Evolution of Web Searching.

    ERIC Educational Resources Information Center

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  16. Development of a metal-clad advanced composite shear web design concept

    NASA Technical Reports Server (NTRS)

    Laakso, J. H.

    1974-01-01

    An advanced composite web concept was developed for potential application to the Space Shuttle Orbiter main engine thrust structure. The program consisted of design synthesis, analysis, detail design, element testing, and large scale component testing. A concept was sought that offered significant weight saving by the use of Boron/Epoxy (B/E) reinforced titanium plate structure. The desired concept was one that was practical and that utilized metal to efficiently improve structural reliability. The resulting development of a unique titanium-clad B/E shear web design concept is described. Three large scale components were fabricated and tested to demonstrate the performance of the concept: a titanium-clad plus or minus 45 deg B/E web laminate stiffened with vertical B/E reinforced aluminum stiffeners.

  17. Electrochromic properties of polyaniline-coated fiber webs for tissue engineering applications.

    PubMed

    Beregoi, Mihaela; Busuioc, Cristina; Evanghelidis, Alexandru; Matei, Elena; Iordache, Florin; Radu, Mihaela; Dinischiotu, Anca; Enculescu, Ionut

    2016-08-30

    By combining the advantages of the electrospinning method (high surface-to-volume ratio, controlled morphology, varied composition and flexibility of the resulting structures) with the electrical activity of polyaniline (PANI), a new core-shell-type material with potential applications in the field of artificial muscles was synthesized. Thus, a poly(methylmethacrylate) solution was electrospun under optimized conditions to obtain randomly oriented polymer fiber webs. A gold layer was then sputtered onto their surface to make them conductive and to improve their mechanical properties. The metalized fiber webs were subsequently covered with a PANI layer by in situ electrochemical polymerization, starting from aniline and using sulphuric acid as the oxidizing agent. Applying a small voltage to the PANI-coated fiber webs in the presence of an electrolyte changes the oxidation state of PANI, which is accompanied by a change in the device's color. The morphological, electrical and biological properties of the resulting multilayered material were also investigated. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Dark Web 101

    DTIC Science & Technology

    2016-07-21

    Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't

  19. TouchTerrain: A simple web-tool for creating 3D-printable topographic models

    NASA Astrophysics Data System (ADS)

    Hasiuk, Franciszek J.; Harding, Chris; Renner, Alex Raymond; Winer, Eliot

    2017-12-01

    An open-source web application, TouchTerrain, was developed to simplify the production of 3D-printable terrain models. Direct Digital Manufacturing (DDM) using 3D printers can change how geoscientists, students, and stakeholders interact with 3D data, with the potential to improve geoscience communication and environmental literacy. No other manufacturing technology can convert digital data into tangible objects quickly at relatively low cost; however, the expertise necessary to produce a 3D-printed terrain model can be a substantial burden: knowledge of geographical information systems, computer-aided design (CAD) software, and 3D printers may all be required. Furthermore, printing models larger than the build volume of a 3D printer can pose further technical hurdles. The TouchTerrain web application simplifies DDM for elevation data by generating digital 3D models customized for a specific 3D printer's capabilities. The only required user input is the selection of a region-of-interest using the provided web application with a Google Maps-style interface. Publicly available digital elevation data are processed via the Google Earth Engine API. To allow the manufacture of 3D terrain models larger than a 3D printer's build volume, the selected area can be split into multiple tiles without third-party software. This application significantly reduces the time and effort required for a non-expert, such as an educator, to obtain 3D terrain models for use in class. The web application is deployed at http://touchterrain.geol.iastate.edu/.
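
    As a hedged illustration of the server-side step described above, the sketch below selects a rectangular region of public elevation data with the Google Earth Engine Python API. The SRTM dataset and the statistic computed are assumptions, since the abstract does not name TouchTerrain's exact data sources or processing.

      # Hedged sketch: selecting a region of public elevation data with the
      # Earth Engine Python API, the kind of server-side step performed
      # before meshing terrain for a 3D printer. Dataset and statistics
      # below are illustrative assumptions.
      import ee

      ee.Initialize()  # assumes Earth Engine credentials are configured

      # Region of interest: [west, south, east, north] in degrees.
      roi = ee.Geometry.Rectangle([-93.7, 42.0, -93.5, 42.1])

      dem = ee.Image("USGS/SRTMGL1_003").clip(roi)

      # Elevation range over the region, e.g. to scale a model's z-axis.
      stats = dem.reduceRegion(
          reducer=ee.Reducer.minMax(),
          geometry=roi,
          scale=30,          # SRTM's native ~30 m resolution
          maxPixels=1e8,
      ).getInfo()
      print(stats)  # {'elevation_max': ..., 'elevation_min': ...}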

  20. Web Feet Guide to Search Engines: Finding It on the Net.

    ERIC Educational Resources Information Center

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  1. Constructive Ontology Engineering

    ERIC Educational Resources Information Center

    Sousan, William L.

    2010-01-01

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain. In addition, their structure and content may vary depending on the application and domain. Several methods described in…

  2. Accelerating North American rangeland conservation with earth observation data and user driven web applications.

    NASA Astrophysics Data System (ADS)

    Allred, B. W.; Naugle, D.; Donnelly, P.; Tack, J.; Jones, M. O.

    2016-12-01

    In 2010, the USDA Natural Resources Conservation Service (NRCS) launched the Sage Grouse Initiative (SGI) to voluntarily reduce threats facing sage-grouse and rangelands on private lands. Over the past five years, SGI has matured into a primary catalyst for rangeland and wildlife conservation across the North American West, focusing on the shared vision of wildlife conservation through sustainable working landscapes and providing win-win solutions for producers, sage-grouse, and 350 other sagebrush-obligate species. SGI and its partners have invested a total of $750 million into rangeland and wildlife conservation. Moving forward, SGI continues to focus on rangeland conservation. Partnering with Google Earth Engine, SGI has developed outcome-monitoring and conservation-planning tools at continental scales. The SGI science team is currently developing assessment and monitoring algorithms for key conservation indicators. The SGI web application utilizes Google Earth Engine for user-defined analysis and planning, putting the appropriate information directly into the hands of managers and conservationists.

  3. Dynamics of a macroscopic model characterizing mutualism of search engines and web sites

    NASA Astrophysics Data System (ADS)

    Wang, Yuanshi; Wu, Hong

    2006-05-01

    We present a model to describe the mutualistic relationship between search engines and web sites. In the model, search engines and web sites benefit from each other, while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist, or grow explosively. From these conditions, we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines contributes little to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines, and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
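
    The record does not reproduce the paper's equations. A generic obligate-mutualism system consistent with the description (web sites x persist on their own; search engines y have a negative intrinsic growth rate and die out without sites) would read, as a hedged sketch with all parameters positive:

      % Hedged sketch, not the paper's actual equations:
      \frac{dx}{dt} = x\,(r - a x + b y), \qquad
      \frac{dy}{dt} = y\,(-d + c x - e y)

    In such a system the engines persist only when the benefit term c x outweighs the intrinsic decline d near equilibrium; close to that threshold the outcome can also depend on the initial value of y, matching the two regimes of initial-value dependence described in the abstract.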

  4. IntegromeDB: an integrated system and biological search engine.

    PubMed

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  5. Planning and Management of Real-Time Geospatialuas Missions Within a Virtual Globe Environment

    NASA Astrophysics Data System (ADS)

    Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.

    2011-09-01

    This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the-art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data-link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the virtual globe technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.

  6. Indexing and Retrieval for the Web.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    2003-01-01

    Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…

  7. A resource-oriented architecture for a Geospatial Web

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    In this presentation we discuss some architectural issues in the design of an architecture for a Geospatial Web, that is, an information system for sharing geospatial resources according to the Web paradigm. The success of the Web in building a multi-purpose information space has raised questions about the possibility of adopting the same approach for systems dedicated to the sharing of more specific resources, such as geospatial information, that is, information characterized by a spatial/temporal reference. To this aim, an investigation of the nature of the Web and of the validity of its paradigm for geospatial resources is required.

    The Web was born in the early 90s to provide "a shared information space through which people and machines could communicate" [Berners-Lee 1996]. It was originally built around a small set of specifications (e.g., URI, HTTP, HTML); however, in the last two decades several other technologies and specifications have been introduced to extend its capabilities. Most of them (e.g., the SOAP family) actually aimed to transform the Web into a generic distributed computing infrastructure. While these efforts were definitely successful in enabling service-oriented approaches for machine-to-machine interactions supporting complex business processes (e.g., for e-Government and e-Business applications), they do not fit the original concept of the Web. In 2000, R. T. Fielding, one of the designers of the original Web specifications, proposed a new architectural style for distributed systems, called REST (Representational State Transfer), aiming to capture the fundamental characteristics of the Web as it was originally conceived [Fielding 2000]. In this view, the nature of the Web lies not so much in the technologies as in the way they are used. Keeping the Web architecture conformant to the REST style would then assure the scalability, extensibility and low entry barrier of the original Web. Conversely, systems using the same Web technologies and specifications according to a different architectural style, despite their usefulness, should not be considered part of the Web.

    If the REST style captures the significant Web characteristics, then, in order to build a Geospatial Web, its architecture must satisfy all the REST constraints. One of them is of particular importance: the adoption of a uniform interface. It prescribes that all geospatial resources be accessed through the same interface; moreover, according to the REST style, this interface must satisfy four further constraints: a) identification of resources; b) manipulation of resources through representations; c) self-descriptive messages; and d) hypermedia as the engine of application state. In the Web, the uniform interface provides basic operations which are meaningful for generic resources. They typically implement the CRUD pattern (Create-Retrieve-Update-Delete), which has proven flexible and powerful in several general-purpose contexts (e.g., filesystem management, SQL for database management systems). Restricting the scope to a subset of resources, it would be possible to identify other generic actions which are meaningful for all of them: for geospatial resources, subsetting, resampling, interpolation and coordinate reference system transformation are candidate operations for a uniform interface. However, an investigation is needed to clarify the semantics of those actions for different resources and, consequently, whether they can really assume the role of generic interface operations.

    Concerning point a) (identification of resources), every resource addressable in the Geospatial Web must have its own identifier (e.g., a URI). This enables citation and re-use of resources simply by providing the URI. OPeNDAP and KVP encodings of OGC data access service specifications might provide a basis for it. Concerning point b) (manipulation of resources through representations), the Geospatial Web poses several issues. While the Web mainly handles semi-structured information, in the Geospatial Web the information is typically structured, with several possible data models (e.g., point series, gridded coverages, trajectories) and encodings. A possibility would be to simplify the interchange formats, choosing to support a subset of data models and formats. This is what the Web designers actually did in choosing to define a common format for hypermedia (HTML), even though the underlying protocol is generic. Concerning point c) (self-descriptive messages), the exchanged messages should describe themselves and their content. This would not be a major issue, considering the effort put into geospatial metadata models and specifications in recent years. Point d) (hypermedia as the engine of application state) is where the Geospatial Web would differ most from existing geospatial information sharing systems. Existing systems typically adopt a service-oriented architecture, where applications are built as a single service or as a workflow of services. In the Geospatial Web, by contrast, applications would be built by following paths between interconnected resources, with the links between resources made explicit as hyperlinks. The adoption of Semantic Web solutions would allow defining not only the existence of a link between two resources but also its nature.

    Implementing a Geospatial Web would yield an information system with the same characteristics as the Web, sharing its strengths and weaknesses. The main advantages would be the following:
    • The user would interact with the Geospatial Web according to the well-known Web navigation paradigm. This would lower the barrier to access to geospatial applications for non-specialists (cf. the success of Google Maps and other Web mapping applications);
    • Successful Web and Web 2.0 applications - search engines, feeds, social networks - could be integrated or replicated in the Geospatial Web.
    The main drawbacks would be the following:
    • The uniform interface simplifies the overall system architecture (e.g., no service registry or service descriptors required) but moves the complexity to the data representation. Moreover, since the interface must stay generic, it is necessarily simple, and complex interactions may therefore require several transfers;
    • In the geospatial domain, some of the most valuable resources are processes (e.g., environmental models); how they can be modeled as resources accessed through the common interface is an open issue.
    Weighing these advantages and drawbacks, a Geospatial Web would be useful, but suited to specific use cases rather than to all possible applications. The Geospatial Web architecture could be partly based on existing specifications, while other aspects need investigation.

    References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future". IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Fielding 2000] Fielding, R. T. 2000. Architectural styles and the design of network-based software architectures. PhD Dissertation. Dept. of Information and Computer Science, University of California, Irvine.
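
    As a hedged illustration of the uniform-interface discussion above, the sketch below exposes a toy geospatial coverage through CRUD-style HTTP verbs, with subsetting expressed as a generic query parameter and a hypermedia link in the representation. The resource names, the bbox parameter and the in-memory store are illustrative assumptions, not part of the presented architecture.

      # Hedged sketch: a uniform, resource-oriented interface for geospatial
      # coverages in the REST style discussed above. Resource names, the
      # bbox query parameter and the in-memory store are assumptions.
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      # Toy store: coverage id -> list of (lon, lat, value) samples.
      COVERAGES = {
          "sst-2010": [(-10.0, 35.0, 17.2), (-9.5, 35.5, 17.8), (2.0, 39.0, 19.1)],
      }

      @app.get("/coverages/<cid>")
      def get_coverage(cid):
          """Retrieve a coverage representation; ?bbox=w,s,e,n subsets it."""
          samples = COVERAGES.get(cid)
          if samples is None:
              return jsonify(error="no such coverage"), 404
          if "bbox" in request.args:
              w, s, e, n = map(float, request.args["bbox"].split(","))
              samples = [p for p in samples if w <= p[0] <= e and s <= p[1] <= n]
          # Hypermedia as the engine of application state: link the resource.
          return jsonify(id=cid, samples=samples,
                         links=[{"rel": "self", "href": f"/coverages/{cid}"}])

      if __name__ == "__main__":
          app.run()

    A GET on /coverages/sst-2010?bbox=-11,34,-9,36 would return only the first two samples, keeping subsetting a generic operation of the interface rather than a service-specific call.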

  8. An Ontology for Insider Threat Indicators Development and Applications

    DTIC Science & Technology

    2014-11-01

    An Ontology for Insider Threat Indicators Development and Applications Daniel L. Costa, Matthew L. Collins, Samuel J. Perl, Michael J. Albrethsen...services, commit fraud against an organization, steal intellectual property, or conduct national security espionage, sabotaging systems and data, as...engineering plans from the victim organization’s computer systems to his new employer. The insider accessed a web server with an administrator account

  9. BPELPower—A BPEL execution engine for geospatial web services

    NASA Astrophysics Data System (ADS)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements lie especially in its capabilities for handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios are discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high-performance parallel processing and broad Web paradigms.

  10. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  11. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general-purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web and JavaScript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control of aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.
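
    The "server push" back end described above is commonly realized as long-polling: the client issues an HTTP request that the server holds open until a new visualization state is available. The sketch below shows that general pattern under stated assumptions; the endpoint name, the state queue and the demo producer are illustrative, not WebViz's actual implementation.

      # Hedged sketch of the HTTP "server push" (long-poll) pattern: the
      # client re-issues a request that the server holds until a new
      # visualization state is available.
      import queue
      import threading
      import time

      from flask import Flask, jsonify

      app = Flask(__name__)
      updates: "queue.Queue[dict]" = queue.Queue()

      @app.get("/session/updates")
      def poll_updates():
          try:
              # Hold the request open up to 25 s waiting for a state change.
              state = updates.get(timeout=25)
              return jsonify(state)
          except queue.Empty:
              return "", 204  # no content; the client immediately re-polls

      def produce_demo_states():
          """Stand-in for a render cluster publishing camera/time-step state."""
          for step in range(100):
              time.sleep(1)
              updates.put({"timestep": step, "camera": [0, 0, 5]})

      if __name__ == "__main__":
          threading.Thread(target=produce_demo_states, daemon=True).start()
          app.run(threaded=True)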

  12. Application of Multiprotocol Medical Imaging Communications and an Extended DICOM WADO Service in a Teleradiology Architecture

    PubMed Central

    Koutelakis, George V.; Anastassopoulos, George K.; Lymberopoulos, Dimitrios K.

    2012-01-01

    Multiprotocol medical imaging communication through the Internet is more flexible than tightly coupled DICOM transfers. This paper introduces a modular multiprotocol teleradiology architecture that integrates DICOM and common Internet services (based on the web, FTP, and e-mail) into a unique operational domain. The extended WADO service (a web extension of DICOM) and the other proposed services allow access to all levels of the DICOM information hierarchy, as opposed to solely the Object level. A lightweight client site is considered adequate, because the server site of the architecture provides clients with service interfaces through the web, as well as secure space for temporary storage, called User Domains, so that users can fulfill their applications' tasks. The proposed teleradiology architecture was pilot-implemented using mainly Java-based technologies and evaluated by engineers in collaboration with doctors. The new architecture ensures flexibility in access, user mobility, and enhanced data security. PMID:22489237
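
    The extended WADO service builds on the standard WADO-URI convention, in which a DICOM object is fetched over HTTP by study, series and object UIDs. A minimal sketch of such a retrieval, assuming a hypothetical server URL and hypothetical UIDs:

      # Hedged sketch: a standard WADO-URI retrieval, the web extension of
      # DICOM that the architecture extends. Server URL and UIDs below are
      # hypothetical; the query parameters follow the WADO-URI convention.
      import urllib.parse
      import urllib.request

      WADO_ENDPOINT = "https://pacs.example.org/wado"  # hypothetical server

      params = urllib.parse.urlencode({
          "requestType": "WADO",
          "studyUID": "1.2.840.113619.2.1.1",   # hypothetical UIDs
          "seriesUID": "1.2.840.113619.2.1.2",
          "objectUID": "1.2.840.113619.2.1.3",
          "contentType": "application/dicom",   # raw DICOM object
      })

      with urllib.request.urlopen(f"{WADO_ENDPOINT}?{params}") as resp:
          with open("object.dcm", "wb") as out:
              out.write(resp.read())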

  13. IntegromeDB: an integrated system and biological search engine

    PubMed Central

    2012-01-01

    Background With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Description Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. Conclusions The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback. PMID:22260095

  14. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools.

    PubMed

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A; Calhoun, Vince D

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data.

  15. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools

    PubMed Central

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A.; Calhoun, Vince D.

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data. PMID:25206330

  16. Faculty Recommendations for Web Tools: Implications for Course Management Systems

    ERIC Educational Resources Information Center

    Oliver, Kevin; Moore, John

    2008-01-01

    A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…

  17. [Multi-course web-learning system for supporting students of medical technology].

    PubMed

    Honma, Satoru; Wakamatsu, Hidetoshi; Kurihara, Yuriko; Yoshida, Shoko; Sakai, Nobue

    2013-05-01

    A Web-Learning system was developed to support students' self-study for the national qualification examination and for medical engineering practice. Results from small tests in various situations suggest that the unit-learning systems are more effective, especially in the early stages of self-study. In addition, questionnaire responses suggest that students' motivation is related to the number of questions in the system: the fewer the questions, the more easily students work through them and the higher their motivation. The system was therefore extended to enable students to study various subjects and/or units on their own, and it lets them benefit from exercises during lectures. The effectiveness of the system was investigated for the medical subjects installed in it. The relevant questions on medical engineering and pathological histology were divided into several groups, from which sixteen Web-Learning subsystems were composed for practical application. These unit-learning systems proved more useful for most students than the single overall Web-Learning system.

  18. Helping Students Choose Tools To Search the Web.

    ERIC Educational Resources Information Center

    Cohen, Laura B.; Jacobson, Trudi E.

    2000-01-01

    Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…

  19. Search Engines: Gateway to a New ``Panopticon''?

    NASA Astrophysics Data System (ADS)

    Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia

    Nowadays, Internet users depend on various search engines to find requested information on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for locating desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine, is presented as an alternative to the privacy-intrusive practices of existing search engines.

  20. Computer Applications in Balancing Chemical Equations.

    ERIC Educational Resources Information Center

    Kumar, David D.

    2001-01-01

    Discusses computer-based approaches to balancing chemical equations. Surveys 13 methods: 6 based on matrix algebra, 2 interactive programs, 1 stand-alone system, 1 developed as an algorithm in BASIC, 1 based on design engineering, 1 written in HyperCard, and 1 prepared for the World Wide Web. (Contains 17 references.) (Author/YDS)
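
    The matrix-based methods the survey groups six approaches under reduce balancing to finding an integer null-space vector of an element-by-species composition matrix. A minimal sketch of that idea for H2 + O2 -> H2O:

      # Minimal sketch of the matrix approach: balancing reduces to an
      # integer null-space vector of the element-by-species composition
      # matrix (product columns negated). Shown for H2 + O2 -> H2O.
      from math import lcm

      from sympy import Matrix

      # Rows: elements H, O. Columns: H2, O2, H2O (H2O negated as a product).
      A = Matrix([
          [2, 0, -2],   # hydrogen balance
          [0, 2, -1],   # oxygen balance
      ])

      v = A.nullspace()[0]                  # rational solution, e.g. [1, 1/2, 1]
      scale = lcm(*(term.q for term in v))  # clear the denominators
      coeffs = [int(term * scale) for term in v]
      print(coeffs)  # [2, 1, 2]  ->  2 H2 + O2 -> 2 H2O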

  1. Result Merging Strategies for a Current News Metasearcher.

    ERIC Educational Resources Information Center

    Rasolofo, Yves; Hawking, David; Savoy, Jacques

    2003-01-01

    Metasearching of online current news services is a potentially useful Web application of distributed information retrieval techniques. Reports experiences in building a metasearcher designed to provide up-to-date searching over a significant number of rapidly changing current news sites, focusing on how to merge results from the search engines at…
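
    The snippet above elides the paper's specific merging strategies, so the sketch below shows only a common baseline for the problem it names: per-engine min-max score normalization followed by a single merged ranking. The function names and toy scores are illustrative.

      # Hedged baseline for metasearch result merging: normalize each
      # engine's raw scores onto [0, 1], then merge; duplicate URLs keep
      # their best normalized score.
      def normalize(results: dict[str, float]) -> dict[str, float]:
          """Map one engine's raw scores onto [0, 1] (min-max)."""
          lo, hi = min(results.values()), max(results.values())
          span = (hi - lo) or 1.0
          return {url: (s - lo) / span for url, s in results.items()}

      def merge(*engines: dict[str, float]) -> list[tuple[str, float]]:
          best: dict[str, float] = {}
          for results in engines:
              for url, score in normalize(results).items():
                  best[url] = max(best.get(url, 0.0), score)
          return sorted(best.items(), key=lambda kv: kv[1], reverse=True)

      print(merge({"a.com": 12.0, "b.com": 7.0}, {"b.com": 0.9, "c.com": 0.4}))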

  2. Deploying the Win TR-20 computational engine as a web service

    USDA-ARS?s Scientific Manuscript database

    Despite its simplicity and limitations, the runoff curve number method remains a widely-used hydrologic modeling tool, and its use through the USDA Natural Resources Conservation Service (NRCS) computer application WinTR-20 is expected to continue for the foreseeable future. To facilitate timely up...

  3. Towards Agile Ontology Maintenance

    NASA Astrophysics Data System (ADS)

    Luczak-Rösch, Markus

    Ontologies are an appropriate means to represent knowledge on the Web. Research on ontology engineering has produced practices for integrative lifecycle support. However, broader success of ontologies in Web-based information systems remains unreached, while more lightweight semantic approaches are rather successful. We assume that, paired with the emerging trend of services and microservices on the Web, new dynamic scenarios will gain momentum in which a shared knowledge base is made available to several dynamically changing services with disparate requirements. Our work envisions a step towards such a dynamic scenario, in which an ontology adapts in an agile way to the requirements of the accessing services and applications, as well as the users' needs, and reduces the experts' involvement in ontology maintenance processes.

  4. Non-invasive lightweight integration engine for building EHR from autonomous distributed systems.

    PubMed

    Angulo, Carlos; Crespo, Pere; Maldonado, José A; Moner, David; Pérez, Daniel; Abad, Irene; Mandingorra, Jesús; Robles, Montserrat

    2007-12-01

    In this paper we describe Pangea-LE, a message-oriented lightweight data integration engine that allows homogeneous and concurrent access to clinical information from dispersed and heterogeneous data sources. The engine extracts the information and passes it to the requesting client applications in a flexible XML format. The XML response message can be formatted on demand by appropriate Extensible Stylesheet Language (XSL) transformations in order to meet the needs of client applications. We also present a real deployment in a hospital, where Pangea-LE collects and generates an XML view of all the available patient clinical information. The information is presented to healthcare professionals in an Electronic Health Record (EHR) viewer Web application with patient search and EHR browsing capabilities. Deployment in a real setting has been a success due to the non-invasive nature of Pangea-LE, which leaves the existing information systems untouched.
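
    The on-demand XSL formatting step described above can be illustrated with lxml: an XML response is reshaped per client by an XSLT stylesheet. The document and stylesheet below are illustrative, not Pangea-LE's actual formats.

      # Hedged sketch of on-demand XSL formatting: the engine's XML response
      # is reshaped per client by an XSLT stylesheet. Document and stylesheet
      # here are illustrative stand-ins.
      from lxml import etree

      response = etree.XML(
          "<patient><name>Doe</name><episode id='e1'>cardiology</episode></patient>"
      )

      stylesheet = etree.XML("""\
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/patient">
          <summary><xsl:value-of select="name"/></summary>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(stylesheet)
      print(str(transform(response)))  # prints the transformed XML: <summary>Doe</summary>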

  5. Non-invasive light-weight integration engine for building EHR from autonomous distributed systems.

    PubMed

    Crespo Molina, Pere; Angulo Fernández, Carlos; Maldonado Segura, José A; Moner Cano, David; Robles Viejo, Montserrat

    2006-01-01

    Pangea-LE is a message-oriented lightweight integration engine allowing concurrent access to clinical information from dispersed and heterogeneous data sources. The engine extracts the information and serves it to requesting client applications in a flexible XML format. This XML response message can be formatted on demand by an appropriate XSL (Extensible Stylesheet Language) transformation in order to fit client application needs. In this article we present a real use case in which Pangea-LE collects and generates, on the fly, a structured view of all the patient clinical information available in a healthcare organisation. This information is presented to healthcare professionals in an EHR (Electronic Health Record) viewer Web application with patient search and EHR browsing capabilities. Deployment in a real environment has been a notable success due to the non-invasive approach, which fully respects the existing information systems.

  6. Spatial Audio on the Web: Or Why Can't I Hear Anything Over There?

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahunada, Albert J. (Technical Monitor)

    1997-01-01

    Auditory complexity, freedom of movement and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.

  7. Evaluation of a metal shear web selectively reinforced with filamentary composites for space shuttle application. Phase 1 summary report: Shear web design development

    NASA Technical Reports Server (NTRS)

    Laakso, J. H.; Zimmerman, D. K.

    1972-01-01

    An advanced composite shear web design concept was developed for the Space Shuttle orbiter main engine thrust beam structure. Various web concepts were synthesized by a computer-aided adaptive random search procedure. A practical concept is identified having a titanium-clad + or - 45 deg boron/epoxy web plate with vertical boron/epoxy reinforced aluminum stiffeners. The boron-epoxy laminate contributes to the strength and stiffness efficiency of the basic web section. The titanium-cladding functions to protect the polymeric laminate parts from damaging environments and is chem-milled to provide reinforcement in selected areas. Detailed design drawings are presented for both boron/epoxy reinforced and all-metal shear webs. The weight saving offered is 24% relative to all-metal construction at an attractive cost per pound of weight saved, based on the detailed designs. Small scale element tests substantiate the boron/epoxy reinforced design details in critical areas. The results show that the titanium-cladding reliably reinforces the web laminate in critical edge load transfer and stiffener fastener hole areas.

  8. A Connection Model between the Positioning Mechanism and Ultrasonic Measurement System via a Web Browser to Assess Acoustic Target Strength

    NASA Astrophysics Data System (ADS)

    Ishii, Ken; Imaizumi, Tomohito; Abe, Koki; Takao, Yoshimi; Tamura, Shuko

    This paper details a network-controlled measurement system for use in fisheries engineering. The target strength (TS) of fish is important for converting acoustic integration values obtained during acoustic surveys into estimates of fish abundance. The target strength pattern is measured by combining a rotation system, which controls the aspect of the sample, with an ultrasonic echo data acquisition system. The user interface of the network architecture is designed for collaborative use with researchers in other organizations. The flexible network architecture is based on the web direct-access model for the rotation mechanism. The user interface is available for monitoring and control via a web browser installed on any terminal PC (personal computer). Previously, the two applications were combined not through a web browser but through a dedicated interface program. A connection model is therefore proposed in which the two applications communicate indirectly via a DCOM (Distributed Component Object Model) server, and this model is added to the web direct-access model. A prompt-report system for TS measurement and a web-browser-controlled positioning and measurement system using an electric flatcar were developed. With a secure network architecture, DCOM communications over both the intranet and LAN are successfully authenticated.

  9. Astronomy and Space Technologies, Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    This paper is the first part (out of five) of a research survey of the WILGA Symposium, May 2012 edition, concerned with photonics and electronics applications in astronomy and space technologies. It presents a digest of selected technical results presented by young researchers from technical universities across Poland during the jubilee XXXth SPIE-IEEE Wilga 2012 symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object-oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and developments in the JET tokamak and Pi-of-the-Sky experiments. The symposium is an annual summary of the numerous Ph.D. theses carried out in Poland in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country, with guests from this part of Europe. A digest of Wilga references is presented [1-275].

  10. Mobile Timekeeping Application Built on Reverse-Engineered JPL Infrastructure

    NASA Technical Reports Server (NTRS)

    Witoff, Robert J.

    2013-01-01

    Every year, non-exempt employees cumulatively waste over one man-year tracking their time and using the timekeeping Web page to save those times. This app eliminates this waste. The innovation is a native iPhone app. Libraries were built around a reverse-engineered JPL API. It represents a punch-in/punch-out paradigm for timekeeping. It is accessible natively via iPhones, and features ease of access. Any non-exempt employee can natively punch in and out, as well as save and view their JPL timecard. This app is built on custom libraries created by reverse-engineering the standard timekeeping application. Communication is through custom libraries that re-route traffic through BrowserRAS (remote access service). This has value at any center where employees track their time.

  11. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling.

    PubMed

    Devi, R Suganya; Manjula, D; Siddharth, R K

    2015-01-01

    Web crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web search engines face new challenges due to the availability of vast amounts of web documents, which makes the retrieved results less applicable to the analysers. Recently, however, web crawling has focused solely on obtaining the links of the corresponding documents, which then have to be further processed for future use, increasing the analyser's workload. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows complete hierarchical scanning of the corresponding web links. Each link is then accessed via its source code, and metadata such as the title, keywords, and description are extracted. This content is essential for any type of analysis to be carried out on the Big Data obtained as a result of web crawling.
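
    The paper's modifications to Depth First Search are not spelled out in the abstract, so the sketch below shows the described pipeline in its plain form: a depth-limited DFS from a seed URL that extracts each page's title and meta keywords/description using only the standard library. The seed URL and depth limit are illustrative.

      # Hedged sketch: depth-limited DFS crawl from a seed URL, extracting
      # each page's title and meta keywords/description (stdlib only).
      import urllib.request
      from html.parser import HTMLParser
      from urllib.parse import urljoin

      class PageParser(HTMLParser):
          def __init__(self):
              super().__init__()
              self.links, self.meta, self.title = [], {}, ""
              self._in_title = False

          def handle_starttag(self, tag, attrs):
              a = dict(attrs)
              if tag == "a" and a.get("href"):
                  self.links.append(a["href"])
              elif tag == "meta" and a.get("name") in ("keywords", "description"):
                  self.meta[a["name"]] = a.get("content", "")
              elif tag == "title":
                  self._in_title = True

          def handle_endtag(self, tag):
              if tag == "title":
                  self._in_title = False

          def handle_data(self, data):
              if self._in_title:
                  self.title += data

      def crawl(url, depth=2, seen=None):
          seen = set() if seen is None else seen
          if depth < 0 or url in seen:
              return
          seen.add(url)
          try:
              with urllib.request.urlopen(url, timeout=10) as resp:
                  html = resp.read().decode("utf-8", errors="replace")
          except OSError:
              return  # unreachable or non-decodable page; skip it
          page = PageParser()
          page.feed(html)
          print(url, "|", page.title.strip(), "|", page.meta)
          for link in page.links:
              crawl(urljoin(url, link), depth - 1, seen)

      crawl("https://example.com/")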

  12. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling

    PubMed Central

    Devi, R. Suganya; Manjula, D.; Siddharth, R. K.

    2015-01-01

    Web crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web search engines face new challenges due to the availability of vast amounts of web documents, which makes the retrieved results less applicable to the analysers. Recently, however, web crawling has focused solely on obtaining the links of the corresponding documents, which then have to be further processed for future use, increasing the analyser's workload. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows complete hierarchical scanning of the corresponding web links. Each link is then accessed via its source code, and metadata such as the title, keywords, and description are extracted. This content is essential for any type of analysis to be carried out on the Big Data obtained as a result of web crawling. PMID:26137592

  13. HydroViz: A web-based hydrologic observatory for enhancing hydrology and earth-science education

    NASA Astrophysics Data System (ADS)

    Habib, E. H.; Ma, Y.; Williams, D.

    2010-12-01

    The main goal of this study is to develop a virtual hydrologic observatory (HydroViz) that integrates hydrologic field observations with numerical simulations by taking advantage of advances in hydrologic field and remote sensing data, computer modeling, scientific visualization, and web resources and internet accessibility. The HydroViz system is a web-based teaching tool that can run in any web browser. It leverages the strengths of Google Earth to provide authentic, hands-on activities that improve learning. Evaluation of HydroViz was performed in three engineering courses (a senior-level course and two introductory courses at two different universities). Evaluation results indicate that HydroViz provides an improvement over the existing engineering hydrology curriculum: it was effective in facilitating students' learning and understanding of hydrologic concepts and in increasing related skills. HydroViz was much more effective for students in engineering hydrology classes than in the freshman introduction to civil engineering class, although it shows great potential for a freshman audience; even though HydroViz was challenging to some freshmen, most of them still learned the key concepts, and the tool increased the enthusiasm of half of them. The evaluation suggested creating a simplified version of HydroViz for freshman-level courses, identified concepts and tasks that might be too challenging or irrelevant to freshmen, and highlighted areas where the tool could provide more guidance. After the first round of evaluation, the development team made significant improvements to HydroViz, which should further improve its effectiveness in the next round of class applications, planned for the Fall of 2010 in 5 classes at 4 different institutions.

  14. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    A web-based residential energy rating tool with APIs, running on the LBNL website: it provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end-uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  15. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    2016-04-22

    A web-based residential energy rating tool with APIs, running on the LBNL website: it provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end-uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  16. The Effect of Individual Differences on Searching the Web.

    ERIC Educational Resources Information Center

    Ihadjadene, Madjid; Chaudiron, Stephanne,; Martins, Daniel

    2003-01-01

    Reports results from a project that investigated the influence of two types of expertise--knowledge of the search domain and experience of the Web search engines--on students' use of a Web search engine. Results showed participants with good knowledge in the domain and participants with high experience of the Web had the best performances. (AEF)

  17. Photonics and web engineering in Poland, WILGA 2009

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2009-06-01

    The paper is a digest of work presented during a cyclic Ph.D. student symposium on Photonics and Web Engineering, WILGA 2009. The symposium is organized by ISE PW in cooperation with the professional organizations IEEE, SPIE, PSP and KEiT PAN. It presents mainly Ph.D. and M.Sc. thesis work, as well as achievements of young researchers. These papers, numbering more than 250 in some years, are in a certain sense a good digest of the state of academic research capabilities in this branch of science and technology. The research subjects undertaken for Ph.D. theses in electronics are determined by the interests and research capacity (financial, laboratory and intellectual) of the young researchers and their tutors. Basically, the condition of academic electronics research depends on financing coming from application areas. Wilga 2009 included, and this paper accordingly discusses, the following topical sessions on applications of advanced electronic and photonic systems: merging of electronic systems and photonics, Internet engineering, distributed measurement systems, security in information technology, astronomy and cosmic technology, HEP experiments, environment protection, image processing and biometry. The paper also contains more general remarks concerning the workshops organized by and for the Ph.D. students in advanced photonics and electronics systems.

  18. Enlisting User Community Perspectives to Inform Development of a Semantic Web Application for Discovery of Cross-Institutional Research Information and Data

    NASA Astrophysics Data System (ADS)

    Johns, E. M.; Mayernik, M. S.; Boler, F. M.; Corson-Rikert, J.; Daniels, M. D.; Gross, M. B.; Khan, H.; Maull, K. E.; Rowan, L. R.; Stott, D.; Williams, S.; Krafft, D. B.

    2015-12-01

    Researchers seek information and data through a variety of avenues: published literature, search engines, repositories, colleagues, etc. In order to build a web application that leverages linked open data to enable multiple paths for information discovery, the EarthCollab project has surveyed two geoscience user communities to consider how researchers find and share scholarly output. EarthCollab, a cross-institutional, EarthCube-funded project partnering UCAR, Cornell University, and UNAVCO, is employing the open-source semantic web software, VIVO, as the underlying technology to connect the people and resources of virtual research communities. This study presents an analysis of survey responses from members of the two case-study communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. The survey results illustrate the types of research products that respondents indicate should be discoverable within a digital platform and the current methods used to find publications, data, personnel, tools, and instrumentation. The responses showed that scientists rely heavily on general-purpose search engines, such as Google, to find information, but that data center websites and the published literature were also critical sources for finding collaborators, data, and research tools. The survey participants also identified additional features of interest for an information platform, such as search engine indexing, connection to institutional web pages, generation of bibliographies and CVs, and outward linking to social media. Through the survey, the user communities prioritized the types of information that are most important to display in describing their work within a research profile. The analysis of this survey will inform further development of a platform that facilitates different types of information discovery strategies and helps researchers find and use the associated resources of a research project.

  19. Text categorization models for identifying unproven cancer treatments on the web.

    PubMed

    Aphinyanaphongs, Yin; Aliferis, Constantin

    2007-01-01

    The nature of the internet as a non-peer-reviewed (and largely unregulated) publication medium has allowed widespread promotion of inaccurate and unproven medical claims on an unprecedented scale. Patients with conditions that are not currently fully treatable are particularly susceptible to unproven and dangerous promises about miracle treatments. In extreme cases, fatal adverse outcomes have been documented. Most commonly, the costs are financial and psychological, along with delayed application of imperfect but proven scientific modalities. To help protect patients, who may be desperately ill and thus prone to exploitation, we explored the use of machine learning techniques to identify web pages that make unproven claims. This feasibility study shows that the resulting models can identify web pages that make unproven claims in a fully automatic manner, and substantially better than previous web tools and state-of-the-art search engine technology.
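
    The record names the general technique (supervised text categorization of page content) without reproducing the corpus, features, or learner. A minimal sketch of that technique with a TF-IDF representation and a linear classifier, using toy stand-in labels:

      # Hedged sketch of supervised text categorization: the paper's corpus
      # and learner are not given in this record, so the labels and pipeline
      # below are illustrative stand-ins.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      pages = [
          "miracle cure guarantees complete remission in days",    # unproven claim
          "randomized controlled trial of adjuvant chemotherapy",  # evidence-based
          "secret natural remedy doctors do not want you to know",
          "phase III study results published in peer-reviewed journal",
      ]
      labels = [1, 0, 1, 0]  # 1 = makes unproven claims (toy annotation)

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(pages, labels)

      print(model.predict(["this supplement cures all cancers, guaranteed"]))
      # expect [1] on this toy model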

  20. The Web: Can We Make It Easier To Find Information?

    ERIC Educational Resources Information Center

    Maddux, Cleborne D.

    1999-01-01

    Reviews problems with the World Wide Web that can be attributed to human error or ineptitude, and provides suggestions for improvement. Discusses poor Web design, poor use of search engines, and poor quality control by search engines and directories. (AEF)

  1. A web application for automatic prediction of gene translation elongation efficiency.

    PubMed

    Sokolov, Vladimir S; Zuraev, Bulat S; Lashin, Sergei A; Matushkin, Yury G

    2015-03-01

    Expression efficiency is one of the major characteristics describing genes in various modern investigations. Expression efficiency of genes is regulated at various stages: transcription, translation, posttranslational protein modification, and others. In this study, a special EloE (Elongation Efficiency) web application is described. The EloE sorts an organism's genes in descending order by the theoretical rate of the elongation stage of translation, based on an analysis of their nucleotide sequences. The theoretical estimates correlate significantly with available experimental data on gene expression in various organisms. In addition, the program identifies preferential codons in the organism's genes and characterizes the distribution of potential secondary-structure energy in the 5′ and 3′ regions of mRNA. The EloE can be useful for preliminary estimation of translation elongation efficiency for genes for which experimental data are not yet available. Some results can be used, for instance, in other programs modeling artificial genetic structures in genetic engineering experiments. The EloE web application is available at http://www-bionet.sscc.ru:7780/EloE.
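
    EloE's actual elongation-efficiency model is not given in the abstract; as a rough stand-in for the general idea of ranking genes by codon composition, this sketch scores each gene by the mean genome-wide frequency of its codons (the sequences are invented):

        # Illustrative stand-in for codon-based gene ranking; this is not
        # EloE's actual scoring model.
        from collections import Counter

        def codons(seq):
            return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

        def rank_genes(genes):  # genes: dict mapping name -> coding sequence
            usage = Counter(c for seq in genes.values() for c in codons(seq))
            total = sum(usage.values())
            freq = {c: n / total for c, n in usage.items()}
            # Score each gene by the mean genome-wide frequency of its codons.
            score = {name: sum(freq[c] for c in codons(seq)) / len(codons(seq))
                     for name, seq in genes.items()}
            return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

        print(rank_genes({"geneA": "ATGAAAAAA", "geneB": "ATGCCCGGG"}))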

  2. Tethys: A Platform for Water Resources Modeling and Decision Support Apps

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Swain, N. R.

    2015-12-01

    The interactive nature of web applications, or "web apps", makes them an excellent medium for conveying complex scientific concepts to lay audiences and for creating decision support tools that harness cutting-edge modeling techniques. However, the technical expertise required to develop web apps represents a barrier for would-be developers. This barrier can be characterized by the following hurdles that developers must overcome: (1) identify, select, and install software that meets the spatial and computational capabilities commonly required for water resources modeling; (2) orchestrate the use of multiple free and open source software (FOSS) projects and navigate their differing application programming interfaces; (3) learn the multi-language programming skills required for modern web development; and (4) develop a web-secure and fully featured web portal to host the app. Tethys Platform has been developed to lower this technical barrier and minimize the initial development investment that prohibits many scientists and engineers from making use of the web app medium. It includes (1) a suite of FOSS that addresses the unique data and computational needs common to water resources web app development, (2) a Python software development kit that streamlines development, and (3) a customizable web portal that is used to deploy the completed web apps. Tethys synthesizes several software projects including PostGIS, 52°North WPS, GeoServer, Google Maps™, OpenLayers, and Highcharts. It has been used to develop a broad array of web apps for water resources modeling and decision support for several projects including CI-WATER, HydroShare, and the National Flood Interoperability Experiment. The presentation will include live demos of some of the apps that have been developed using Tethys to demonstrate its capabilities.

  3. LaboREM--A Remote Laboratory for Game-Like Training in Electronics

    ERIC Educational Resources Information Center

    Luthon, Franck; Larroque, Benoît

    2015-01-01

    The advances in communication networks and web technologies, in conjunction with the improved connectivity of test and measurement devices make it possible to implement e-learning applications that encompass the whole learning process. In the field of electrical engineering, automation or mechatronics, it means not only lectures, tutorials, demos…

  4. 76 FR 1148 - CRD Hydroelectric LLC, Iowa; Notice of Availability of Environmental Assessment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... reviewed the application for an original license for the Red Rock Hydroelectric Project (FERC Project No... Engineers' Red Rock Dam. Staff prepared an environmental assessment (EA), which analyzes the potential... the Public Reference Room or may be viewed on the Commission's Web site at http://www.ferc.gov using...

  5. 75 FR 1655 - Biweekly Notice Applications and Amendments to Facility Operating Licenses Involving No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ... environmental impact statement or environmental assessment need be prepared for these amendments. If the... (ADAMS) Public Electronic Reading Room on the internet at the NRC Web site, http://www.nrc.gov/reading-rm... Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI as the source of...

  6. Accelerator Technology and High Energy Physics Experiments, Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    The paper is the second part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with accelerator technology and high energy physics experiments. It presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the XXXth Jubilee SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and JET and pi-of-the-sky experiments development. The symposium is an annual summary of the numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country with guests from this part of Europe. A digest of Wilga references is presented [1-275].

  7. Photon Physics and Plasma Research, Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    This paper is the third part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with photon physics and plasma research. It presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the Jubilee XXXth SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and JET tokamak and pi-of-the-sky experiments development. The symposium is an annual summary of the numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country with guests from this part of Europe. A digest of Wilga references is presented [1-270].

  8. How to Boost Engineering Support Via Web 2.0 - Seeds for the Ares Project...and/or Yours?

    NASA Technical Reports Server (NTRS)

    Scott, David W.

    2010-01-01

    The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for the Engineering Support capability for NASA's Ares launch system development. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal intended to provide a seamless interface to support and simplify two critical activities: a) access and analyze Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison; b) provide archive storage for engineering instrumentation data to support engineering design, development, and test. A mix of NASA-written and COTS software provides engineering analysis tools. A by-product of using a data portal to access and display data is access to the collaborative tools inherent in a Web 2.0 environment. This paper discusses how Web 2.0 techniques, particularly social media, might be applied to the traditionally conservative and formal engineering support arena. A related paper by the author [1] considers use

  9. Development of Web-Based Learning Environment Model to Enhance Cognitive Skills for Undergraduate Students in the Field of Electrical Engineering

    ERIC Educational Resources Information Center

    Lakonpol, Thongmee; Ruangsuwan, Chaiyot; Terdtoon, Pradit

    2015-01-01

    This research aimed to develop a web-based learning environment model for enhancing cognitive skills of undergraduate students in the field of electrical engineering. The research is divided into 4 phases: 1) investigating the current status and requirements of web-based learning environment models. 2) developing a web-based learning environment…

  10. ReSEARCH: A Requirements Search Engine: Progress Report 2

    DTIC Science & Technology

    2008-09-01

    and provides a convenient user interface for the search process. Ideally, the web application would be based on Tomcat, a free Java Servlet and JSP...Implementation issues Lucene Java is an Open Source project, available under the Apache License, which provides an accessible API for the development of...from the Apache Lucene website (Lucene-java Wiki, 2008). A search application developed with Lucene consists of the same two major components

  11. Electrospinning of Biocompatible Nanofibers

    NASA Astrophysics Data System (ADS)

    Coughlin, Andrew J.; Queen, Hailey A.; McCullen, Seth D.; Krause, Wendy E.

    2006-03-01

    Artificial scaffolds for growing cells can have a wide range of applications including wound coverings, supports in tissue cultures, drug delivery, and organ and tissue transplantation. Tissue engineering is a promising field which may resolve current problems with transplantation, such as rejection by the immune system and scarcity of donors. One approach to tissue engineering utilizes a biodegradable scaffold onto which cells are seeded and cultured, and ideally develop into functional tissue. The scaffold acts as an artificial extracellular matrix (ECM). Because a typical ECM contains collagen fibers with diameters of 50-500 nm, electrostatic spinning (electrospinning) was used to mimic the size and structure of these fibers. Electrospinning is a novel way of spinning a nonwoven web of fibers on the order of 100 nm, much like the web of collagen in an ECM. We are investigating the ability of several biocompatible polymers (e.g., chitosan and polyvinyl alcohol) to form defect-free nanofiber webs and are studying the influence of the zero shear rate viscosity, molecular weight, entanglement concentration, relaxation time, and solvent on the resulting fiber size and morphology.

  12. Custom-shaping system for bone regeneration by seeding marrow stromal cells onto a web-like biodegradable hybrid sheet.

    PubMed

    Tsuchiya, Kohei; Mori, Taisuke; Chen, Guoping; Ushida, Takashi; Tateishi, Tetsuya; Matsuno, Takeo; Sakamoto, Michiie; Umezawa, Akihiro

    2004-05-01

    New bone for the repair or the restoration of the function of traumatized, damaged, or lost bone is a major clinical need, and bone tissue engineering has been heralded as an alternative strategy for regenerating bone. A novel web-like structured biodegradable hybrid sheet has been developed for bone tissue engineering by preparing knitted poly(DL-lactic-co-glycolic acid) sheets (PLGA sheets) with collagen microsponges in their openings. The PLGA skeleton facilitates the formation of the hybrid sheets into desired shapes, and the collagen microsponges in the pores of the PLGA sheet promote cell adhesion and uniform cell distribution throughout the sheet. A large number of osteoblasts established from marrow stroma adhere to the scaffolds and generate the desired-shaped bone in combination with these novel sheets. These results indicate that the web-like structured novel sheet shows promise for use as a tool for custom-shaped bone regeneration in basic research on osteogenesis and for the development of therapeutic applications. Copyright 2004 Springer-Verlag

  13. Developing creativity and problem-solving skills of engineering students: a comparison of web- and pen-and-paper-based approaches

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew; Belski, Iouri; Hamilton, Margaret

    2017-11-01

    Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed about their study habits and reported they use electronic-based materials more than paper-based materials while studying, suggesting students may engage with web-based tools. Students then generated solutions to a problem task using either a paper-based template or an equivalent web interface. Students who used the web-based approach performed as well as students who used the paper-based approach, suggesting the technique can be successfully adopted and taught online. Web-based tools may therefore be adopted as supplementary material in a range of engineering courses as a way to increase students' options for enhancing problem-solving skills.

  14. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, Anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a real-time, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth science data sets that have been collected on a daily basis. This application uses a third-party, real-time, interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  15. The Technical Work Plan Tracking Tool

    NASA Technical Reports Server (NTRS)

    Chullen, Cinda; Leighton, Adele; Weller, Richard A.; Woodfill, Jared; Parkman, William E.; Ellis, Glenn L.; Wilson, Marilyn M.

    2003-01-01

    The Technical Work Plan Tracking Tool is a web-based application that enables interactive communication and approval of contract requirements that pertain to the administration of the Science, Engineering, Analysis, and Test (SEAT) contract at Johnson Space Center. The implementation of the application has (1) shortened the Technical Work Plan approval process, (2) facilitated writing and documenting requirements in a performance-based environment with associated surveillance plans, (3) simplified the contractor's estimate of the cost for the required work, and (4) allowed the contractor to document how they plan to accomplish the work. The application is accessible to over 300 designated NASA and contractor employees via two Web sites. For each employee, the application regulates access according to the employee's authority to enter, view, and/or print out diverse information, including reports, work plans, purchase orders, and financial data. Advanced features of this application include on-line approval capability, automatic e-mail notifications requesting review by subsequent approvers, and security inside and outside the firewall.

  16. Smart Aerospace eCommerce: Using Intelligent Agents in a NASA Mission Services Ordering Application

    NASA Technical Reports Server (NTRS)

    Moleski, Walt; Luczak, Ed; Morris, Kim; Clayton, Bill; Scherf, Patricia; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    This paper describes how intelligent agent technology was successfully prototyped and then deployed in a smart eCommerce application for NASA. An intelligent software agent called the Intelligent Service Validation Agent (ISVA) was added to an existing web-based ordering application to validate complex orders for spacecraft mission services. This integration of intelligent agent technology with conventional web technology satisfies an immediate NASA need to reduce manual order processing costs. The ISVA agent checks orders for completeness, consistency, and correctness, and notifies users of detected problems. ISVA uses NASA business rules and a knowledge base of NASA services, and is implemented using the Java Expert System Shell (Jess), a fast rule-based inference engine. The paper discusses the design of the agent and knowledge base, and the prototyping and deployment approach. It also covers future directions and other applications, along with lessons learned that may help other projects make their aerospace eCommerce applications smarter.
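
    ISVA's rules are written in Jess and are not reproduced in the record above; as a language-neutral illustration of the same idea, rule-based validation of an order, here is a minimal sketch in Python with hypothetical order fields and rules:

        # Minimal sketch of rule-based order validation in the spirit of
        # ISVA; the rules and order fields are hypothetical, not NASA's.
        RULES = [
            ("missing start date", lambda o: o.get("start_date") is not None),
            ("end date before start",
             lambda o: o.get("end_date", "") >= o.get("start_date", "")),
            ("no services selected", lambda o: len(o.get("services", [])) > 0),
        ]

        def validate(order):
            """Return the list of problems detected in an order."""
            return [msg for msg, ok in RULES if not ok(order)]

        order = {"start_date": "2002-03-01", "end_date": "2002-02-01",
                 "services": ["tracking"]}
        print(validate(order))   # -> ['end date before start']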

  17. The Effectiveness of Web Search Engines to Index New Sites from Different Countries

    ERIC Educational Resources Information Center

    Pirkola, Ari

    2009-01-01

    Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…

  18. Alex Swindler | NREL

    Science.gov Websites

    Staff profile: research interests include distributed computing, Web information systems engineering, software engineering, and computer graphics. Project work includes dashboards, the NREL Energy Story visualization, and Green Button data integration, as well as work associated with an R&D 100 Award. Prior to joining NREL, Alex worked as a system administrator and Web developer.

  19. Estimating search engine index size variability: a 9-year longitudinal study.

    PubMed

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
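
    The extrapolation at the heart of the method can be stated compactly: if a word is known to occur in some fraction of documents in a representative corpus, the engine's reported hit count for that word scales up to an estimate of the index size. A sketch under that assumption, with invented numbers (in practice the estimate would be averaged over many words):

        # Index-size extrapolation from document frequency; all numbers
        # below are invented illustrations.
        def estimate_index_size(df_corpus, corpus_docs, engine_hits):
            fraction = df_corpus / corpus_docs   # P(word appears in a page)
            return engine_hits / fraction        # scale up to the full index

        # A word found in 1 of every 200 corpus pages, 250 million hits:
        print(estimate_index_size(df_corpus=5_000, corpus_docs=1_000_000,
                                  engine_hits=250_000_000))   # -> 5e10 pages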

  20. Network Analysis of Reconnaissance and Intrusion of an Industrial Control System

    DTIC Science & Technology

    2016-09-01

    simulated a plant engineer using the engineering workstation web browser to authenticate to the vegetable cooker HMI. While the engineer established the...observed the vegetable cooker HMI web display, the attacker stopped capturing network traffic. Acting as the attacker, we searched the attacker’s pcap...manually controlled by human activity. In this testbed network, only web browser traffic (HTTP) is created by an operator to view an HMI status

  1. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    PubMed

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and a readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine had a lower overall quality score (p < .01) and covered fewer important HPV-related topics (p < .001) than noncritical Web pages. Critical Web pages required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They were also more likely to raise unsubstantiated concerns about vaccination. Web pages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  2. Agility: Agent - Ility Architecture

    DTIC Science & Technology

    2002-10-01

    existing and emerging standards (e.g., distributed objects, email, web, search engines, XML, Java, Jini). Three agent system components resulted from...agents and other Internet resources and operate over the web (AgentGram), a yellow pages service that uses Internet search engines to locate XML ads for agents and other Internet resources (WebTrader).

  3. Incorporating the Internet into Traditional Library Instruction.

    ERIC Educational Resources Information Center

    Fonseca, Tony; King, Monica

    2000-01-01

    Presents a template for teaching traditional library research and one for incorporating the Web. Highlights include the differences between directories and search engines; devising search strategies; creating search terms; how to choose search engines; evaluating online resources; helpful Web sites; and how to read URLs to evaluate a Web site's…

  4. Children's Search Engines from an Information Search Process Perspective.

    ERIC Educational Resources Information Center

    Broch, Elana

    2000-01-01

    Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…

  5. Searchers Net Treasure in Monterey.

    ERIC Educational Resources Information Center

    McDermott, Irene E.

    1999-01-01

    Reports on Web keyword searching, metadata, Dublin Core, Extensible Markup Language (XML), metasearch engines (metasearch engines search several Web indexes and/or directories and/or Usenet and/or specific Web sites), and the Year 2000 (Y2K) dilemma, all topics discussed at the second annual Internet Librarian Conference sponsored by Information…

  6. Searching for American Indian Resources on the Internet.

    ERIC Educational Resources Information Center

    Pollack, Ira; Derby, Amy

    This paper provides basic information on searching the Internet and lists World Wide Web sites containing resources for American Indian education. Comprehensive and topical Web directories, search engines, and meta-search engines are briefly described. Search strategies are discussed, and seven Web sites are listed that provide more advanced…

  7. Exploiting Semantic Web Technologies to Develop OWL-Based Clinical Practice Guideline Execution Engines.

    PubMed

    Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2016-01-01

    Computerizing paper-based clinical practice guidelines (CPGs) and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies, especially Web Ontology Language (OWL) ontologies, have been used profusely to represent computerized CPGs. Using semantic web reasoning capabilities to execute OWL-based computerized CPGs unties them from a specific custom-built CPG execution engine and increases their shareability, as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from a limited ability to execute CPGs with high levels of expressivity and from the high cognitive load of computerizing paper-based CPGs and updating the computerized versions. To address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL, and OWL 2 DL + Semantic Web Rule Language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs; for executing highly complex CPGs, the OWL 2 DL and OWL 2 DL + SWRL engines offer additional execution capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPGs. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and the validity of the generated recommendations in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.
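
    The engines themselves are not shown in the abstract; as a minimal illustration of the general approach, executing guideline knowledge through off-the-shelf semantic web reasoning, here is a sketch using rdflib with the owlrl reasoner and an invented two-triple "guideline":

        # Guideline execution via semantic web reasoning; the tiny
        # ontology below is hypothetical, not one of the paper's CPGs.
        from rdflib import Graph, Namespace, RDF, RDFS
        from owlrl import DeductiveClosure, OWLRL_Semantics

        EX = Namespace("http://example.org/cpg#")
        g = Graph()
        # Guideline knowledge: every diabetic patient needs an HbA1c test.
        g.add((EX.DiabeticPatient, RDFS.subClassOf, EX.NeedsHbA1cTest))
        # Patient data: p1 is a diabetic patient.
        g.add((EX.p1, RDF.type, EX.DiabeticPatient))

        DeductiveClosure(OWLRL_Semantics).expand(g)   # run the reasoner

        print((EX.p1, RDF.type, EX.NeedsHbA1cTest) in g)   # -> True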

  8. Integrated gas turbine engine-nacelle

    NASA Technical Reports Server (NTRS)

    Adamson, A. P.; Sargisson, D. F.; Stotler, C. L., Jr. (Inventor)

    1979-01-01

    A nacelle for use with a gas turbine engine is provided with an integral webbed structure resembling a spoked wheel for rigidly interconnecting the nacelle and engine. The nacelle is entirely supported in its spacial relationship with the engine by means of the webbed structure. The inner surface of the nacelle defines the outer limits of the engine motive fluid flow annulus, while the outer surface of the nacelle defines a streamlined envelope for the engine.

  9. SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.

    2004-01-01

    SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of the Space Shuttle Columbia accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different to each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.

  10. Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.

    ERIC Educational Resources Information Center

    Eagan, Ann; Bender, Laura

    Searching on the world wide web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…

  11. Sexual information seeking on web search engines.

    PubMed

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat rooms discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  12. Challenges for Rule Systems on the Web

    NASA Astrophysics Data System (ADS)

    Hu, Yuh-Jong; Yeh, Ching-Long; Laun, Wolfgang

    The RuleML Challenge started in 2007 with the objective of inspiring work on the implementation, management, integration, interoperation, and interchange of rules in an open distributed environment such as the Web. Rules are usually classified into three types: deductive rules, normative rules, and reactive rules. Reactive rules are further classified into ECA (event-condition-action) rules and production rules. The study of combining rules and ontologies traces back to earlier active rule systems for relational and object-oriented (OO) databases. Recently, this issue has become one of the most important research problems in the Semantic Web. Once we view a computer-executable policy as a declarative set of rules and ontologies that guides the behavior of entities within a system, we have a flexible way to implement real-world policies without rewriting the computer code, as we did before. Fortunately, we have de facto rule markup languages, such as RuleML or RIF, to achieve the portability and interchange of rules between different rule systems; otherwise, executing real-life rule-based applications on the Web would be almost impossible. Several commercial and open source rule engines are available for rule-based applications. However, we still need a standard rule language and benchmarks, not only to compare rule systems but also to measure progress in the field. Finally, a number of real-life rule-based use cases will be investigated to demonstrate the applicability of current rule systems on the Web.
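
    As a concrete reference point for the production-rule type mentioned above, here is a minimal forward-chaining engine over a working memory of facts; the rules are illustrative only:

        # Minimal forward-chaining production-rule engine: each rule is a
        # (condition set, consequence) pair over string-valued facts.
        def run(facts, rules):
            facts = set(facts)
            changed = True
            while changed:                      # fire until a fixed point
                changed = False
                for condition, consequence in rules:
                    if condition <= facts and consequence not in facts:
                        facts.add(consequence)  # production: assert new fact
                        changed = True
            return facts

        rules = [({"rain"}, "wet_ground"),
                 ({"wet_ground", "freezing"}, "icy_ground")]
        print(sorted(run({"rain", "freezing"}, rules)))
        # -> ['freezing', 'icy_ground', 'rain', 'wet_ground']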

  13. Wilga Photonics and Web Engineering 2011

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2011-10-01

    The paper presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the SPIE-IEEE Wilga 2011 symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and JET and pi-of-the-sky experiments development. The symposium is an annual summary of the numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country with guests from this part of Europe. A digest of Wilga references is presented [1-225].

  14. WILGA Photonics and Web Engineering, January 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    The paper presents a digest of chosen technical work results shown by young researchers from technical universities during the SPIE-IEEE Wilga January 2012 Symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, new technologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and JET and pi-of-the-sky experiments development. The symposium, held twice a year, is a summary of the numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country with guests from this part of Europe. A digest of chosen Wilga references is presented [1-268].

  15. Comparative Analysis of Rank Aggregation Techniques for Metasearch Using Genetic Algorithm

    ERIC Educational Resources Information Center

    Kaur, Parneet; Singh, Manpreet; Singh Josan, Gurpreet

    2017-01-01

    Rank aggregation techniques have found wide application in metasearch as well as in other areas such as sports, voting systems, stock markets, and spam reduction. This paper presents the optimization of rank lists for web queries submitted by the user to different metasearch engines. A metaheuristic approach such as Genetic algorithm based rank…
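
    The genetic-algorithm formulation itself is beyond the record above, but the underlying metasearch task, merging ranked lists from several engines into a single list, can be illustrated with a simple positional (Borda-style) baseline:

        # Borda-style rank aggregation of result lists from several
        # engines; a simple positional baseline, not the paper's GA.
        from collections import defaultdict

        def borda(rankings):
            scores = defaultdict(int)
            for ranking in rankings:
                n = len(ranking)
                for pos, url in enumerate(ranking):
                    scores[url] += n - pos      # higher rank, more points
            return sorted(scores, key=scores.get, reverse=True)

        lists = [["a.com", "b.com", "c.com"],
                 ["b.com", "a.com", "d.com"],
                 ["a.com", "d.com", "b.com"]]
        print(borda(lists))   # -> ['a.com', 'b.com', 'd.com', 'c.com']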

  16. Web Environment for Programming and Control of a Mobile Robot in a Remote Laboratory

    ERIC Educational Resources Information Center

    dos Santos Lopes, Maísa Soares; Gomes, Iago Pacheco; Trindade, Roque M. P.; da Silva, Alzira F.; de C. Lima, Antonio C.

    2017-01-01

    Remote robotics laboratories have been successfully used for engineering education. However, few of them use mobile robots to teach computer science. This article describes a mobile robot Control and Programming Environment (CPE) and its pedagogical applications. The system comprises a remote laboratory for robotics, an online programming tool,…

  17. Surreptitious, Evolving and Participative Ontology Development: An End-User Oriented Ontology Development Methodology

    ERIC Educational Resources Information Center

    Bachore, Zelalem

    2012-01-01

    Ontology not only is considered to be the backbone of the semantic web but also plays a significant role in distributed and heterogeneous information systems. However, ontology still faces limited application and adoption to date. One of the major problems is that prevailing engineering-oriented methodologies for building ontologies do not…

  18. Business logic for geoprocessing of distributed geodata

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian

    2006-12-01

    This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).
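
    The WPS interface mentioned above is an ordinary HTTP protocol, so process discovery can be sketched with the Python standard library alone; the endpoint URL below is a placeholder, not a real server:

        # Query an OGC WPS 1.0.0 server for its process list via a KVP
        # GetCapabilities request; the endpoint is hypothetical.
        import urllib.request
        import xml.etree.ElementTree as ET

        ENDPOINT = "http://example.org/wps"   # placeholder WPS endpoint
        url = ENDPOINT + "?service=WPS&request=GetCapabilities&version=1.0.0"

        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)

        # Process identifiers are ows:Identifier children of wps:Process.
        ns = {"ows": "http://www.opengis.net/ows/1.1"}
        for proc in tree.iter("{http://www.opengis.net/wps/1.0.0}Process"):
            print(proc.find("ows:Identifier", ns).text)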

  19. e-Learning development in medical physics and engineering

    PubMed Central

    Tabakov, S

    2008-01-01

    Medical Physics and Engineering was among the first professions to develop and apply e-Learning (e-L). The profession provides an excellent background for the application of simulations and other e-L materials. The paper describes several layers of e-L development: programming specific simulations; building e-L modules; and developing web-based e-L programmes. The paper shows examples from these layers and outlines their specificities. At the end, the newest e-L development (project EMITEL) is briefly introduced and the necessity of a regularly updated list of e-L activities is emphasised. PMID:21614312

  20. Publicizing Your Web Resources for Maximum Exposure.

    ERIC Educational Resources Information Center

    Smith, Kerry J.

    2001-01-01

    Offers advice to librarians for marketing their Web sites on Internet search engines. Advises against relying solely on spiders and recommends adding metadata to the source code and delivering that information directly to the search engines. Gives an overview of metadata and typical coding for meta tags. Includes Web addresses for a number of…

  1. Staleness Among Web Search Engines.

    ERIC Educational Resources Information Center

    Koehler, Wallace

    1998-01-01

    Describes a study of four major Web search engines that tested for staleness, a condition when a significant number of the hits it returns point to Web pages or server-level domains (SLD) that are no longer viable. Results of tests of URLs with AltaVista, HotBot, InfoSeek, and Open Text are discussed. (Author/LRW)

  2. Discovering How Students Search a Library Web Site: A Usability Case Study.

    ERIC Educational Resources Information Center

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  3. Digging Deeper: The Deep Web.

    ERIC Educational Resources Information Center

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  4. Search strategies on the Internet: general and specific.

    PubMed

    Bottrill, Krys

    2004-06-01

    Some of the most up-to-date information on scientific activity is to be found on the Internet; for example, on the websites of academic and other research institutions and in databases of currently funded research studies provided on the websites of funding bodies. Such information can be valuable in suggesting new approaches and techniques that could be applicable in a Three Rs context. However, the Internet is a chaotic medium, not subject to the meticulous classification and organisation of classical information resources. At the same time, Internet search engines do not match the sophistication of search systems used by database hosts. Also, although some offer relatively advanced features, user awareness of these tends to be low. Furthermore, much of the information on the Internet is not accessible to conventional search engines, giving rise to the concept of the "Invisible Web". General strategies and techniques for Internet searching are presented, together with a comparative survey of selected search engines. The question of how the Invisible Web can be accessed is discussed, as well as how to keep up-to-date with Internet content and improve searching skills.

  5. A Taxonomic Search Engine: Federating taxonomic databases using web services

    PubMed Central

    Page, Roderic DM

    2005-01-01

    Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names. PMID:15757517
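
    The federated pattern the TSE demonstrates, fanning one name query out to several services and merging the replies into a consistent record format, can be sketched as follows; the service functions are local stubs, not the real ITIS/NCBI interfaces:

        # Federated-search pattern: query several taxonomic services and
        # merge results; the 'services' here are stubs, not real APIs.
        def itis(name):
            return [{"source": "ITIS", "name": name, "id": "180092"}]

        def ncbi(name):
            return [{"source": "NCBI", "name": name, "id": "9606"}]

        SERVICES = [itis, ncbi]

        def search(name):
            results = []
            for service in SERVICES:   # fan the query out
                try:
                    results.extend(service(name))
                except Exception:      # a failing backend must not kill the search
                    continue
            return results             # one consistent record format

        for rec in search("Homo sapiens"):
            print(rec["source"], rec["id"])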

  6. Automated Data Tagging in the HLA

    NASA Astrophysics Data System (ADS)

    Gaffney, N. I.; Miller, W. W.

    2008-08-01

    One of the more powerful and popular forms of data organization implemented in most popular information-sharing web applications is data tagging. With a rich user base from which to gather and digest tags, many interesting and often unanticipated yet very useful associations are revealed. The astronomical community has a richer pool of existing, digitally stored, and searchable data than any of the currently popular web communities, such as YouTube or MySpace, had when they started. In initial experiments with the search engine for the Hubble Legacy Archive, we have created a simple yet powerful scheme by which information from a footprint service, the NED and SIMBAD catalog services, and ADS abstracts and keywords can be used to initially tag data with standard keywords. By ingesting this into a publicly available information search engine, such as Apache Lucene, one can create a simple and powerful data tag search engine and association system. By then augmenting this with user-provided keywords and usage pattern analysis, one can produce a powerful modern data mining system for any astronomical data warehouse.
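
    Lucene itself is a Java library; as a Python stand-in using the Whoosh search library, this sketch follows the scheme described above, indexing records with keyword tags and then searching by tag (the record names and tags are invented):

        # Tag indexing and search with Whoosh, a pure-Python stand-in for
        # Apache Lucene; record names and tags are invented.
        import os
        from whoosh.fields import Schema, ID, KEYWORD
        from whoosh.index import create_in
        from whoosh.qparser import QueryParser

        schema = Schema(name=ID(stored=True),
                        tags=KEYWORD(stored=True, lowercase=True, commas=True))
        os.makedirs("tagidx", exist_ok=True)
        ix = create_in("tagidx", schema)

        writer = ix.writer()
        writer.add_document(name="obs-001", tags="galaxy,spiral,m101")
        writer.add_document(name="obs-002", tags="nebula,emission")
        writer.commit()

        with ix.searcher() as searcher:
            query = QueryParser("tags", ix.schema).parse("spiral")
            for hit in searcher.search(query):
                print(hit["name"])   # -> obs-001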

  7. F-OWL: An Inference Engine for Semantic Web

    NASA Technical Reports Server (NTRS)

    Zou, Youyong; Finin, Tim; Chen, Harry

    2004-01-01

    Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description logic based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.

  8. The Extreme Searcher's Guide to Web Search Engines: A Handbook for the Serious Searcher. 2nd Edition.

    ERIC Educational Resources Information Center

    Hock, Randolph

    This book aims to facilitate more effective and efficient use of World Wide Web search engines by helping the reader: know the basic structure of the major search engines; become acquainted with those attributes (features, benefits, options, content, etc.) that search engines have in common and where they differ; know the main strengths and…

  9. Moving Real Exergaming Engines on the Web: The webFitForAll Case Study in an Active and Healthy Ageing Living Lab Environment.

    PubMed

    Konstantinidis, Evdokimos I; Bamparopoulos, Giorgos; Bamidis, Panagiotis D

    2017-05-01

    Exergames have been the subject of research and technology innovations for a number of years. Different devices and technologies have been utilized to train the body and the mind of senior people or different patient groups. In the past, we presented FitForAll, whose protocol efficacy was proven through widely run (controlled) pilots with more than 116 seniors over a period of two months. The current piece of work expands this and presents the first truly web-based exergaming platform, which is solely based on HTML5 and JavaScript without any browser plugin requirements. The adopted architecture (controller application communication framework) combines a unified solution for input devices such as MS Kinect and the Wii Balance Board, which may seamlessly be exploited through standard physical exercise protocols (American College of Sports Medicine guidelines), and accommodates high-detail logging; this allows for proper pilot testing and usability evaluations in ecologically valid Living Lab environments. The latter type of setup is also used herein for evaluating the web application with more than a dozen real elderly users following quantitative approaches.

  10. Expert system for web based collaborative CAE

    NASA Astrophysics Data System (ADS)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database, and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experience, theories, typical examples, and other related knowledge to be used in the pre-processing stage of FEA were categorized into analysis-process and object knowledge. An integrated knowledge model based on object-oriented and rule-based methods is then described, followed by the integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning. Finally, the analysis process of this expert system in a web-based CAE application is illustrated with an analysis example of a machine tool column, which demonstrates the validity of the system.
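
    The CBR step can be illustrated by its core operation, retrieving the stored case closest to a new problem description; the case features and advice below are invented:

        # Minimal case-based reasoning retrieval: reuse the advice of the
        # stored FEA case nearest to the new problem. Cases are invented.
        import math

        CASES = [  # ((length_m, load_kN, steel?), stored advice)
            ((2.0, 10.0, 1.0), "mesh: hexahedral, size 5 mm"),
            ((0.5, 80.0, 0.0), "mesh: tetrahedral, size 2 mm"),
        ]

        def retrieve(query):
            # Euclidean distance in feature space (math.dist: Python 3.8+)
            return min(CASES, key=lambda case: math.dist(case[0], query))[1]

        print(retrieve((1.8, 15.0, 1.0)))   # -> mesh: hexahedral, size 5 mm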

  11. Search Engines on the World Wide Web.

    ERIC Educational Resources Information Center

    Walster, Dian

    1997-01-01

    Discusses search engines and provides methods for determining what resources are searched, the quality of the information, and the algorithms used that will improve the use of search engines on the World Wide Web, online public access catalogs, and electronic encyclopedias. Lists strategies for conducting searches and for learning about the latest…

  12. Interactive Information Organization: Techniques and Evaluation

    DTIC Science & Technology

    2001-05-01

    information search and access. Locating interesting information on the World Wide Web is the main task of on-line search engines. Such engines accept a...likelihood of being relevant to the user's request. The majority of today's Web search engines follow this scenario. The ordering of documents in the

  13. Quality analysis of patient information about knee arthroscopy on the World Wide Web.

    PubMed

    Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan

    2007-05-01

    This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared the results obtained with different search engines and tried to establish the best possible search strategy for attaining the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with a pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding repetitions, 117 Web sites were available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to provide reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining relevant information compared with using a single search engine. Our study shows the difficulties encountered by patients in obtaining information regarding knee arthroscopy and highlights the duty of orthopaedic surgeons in helping their patients to identify the most relevant and authentic information on the World Wide Web in the most efficient manner.

  14. GoWeb: a semantic search engine for the life science web.

    PubMed

    Dietze, Heiko; Schroeder, Michael

    2009-10-01

    Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large result sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtIvE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves a 77% success rate, improving on an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.
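
    GoWeb's ontology step can be caricatured as dictionary-based term matching: ontology labels found in a result snippet are mapped to ontology identifiers so that results can be grouped and filtered. A toy sketch with a two-term GeneOntology-style subset:

        # Toy ontology-backed annotation of search snippets; the term
        # dictionary is an invented two-entry subset.
        ONTOLOGY = {
            "apoptosis": "GO:0006915",
            "dna repair": "GO:0006281",
        }

        def annotate(snippet):
            text = snippet.lower()
            return {label: oid for label, oid in ONTOLOGY.items()
                    if label in text}

        hits = ["TP53 triggers apoptosis under stress",
                "BRCA1 participates in DNA repair pathways"]
        for h in hits:
            print(annotate(h), "-", h)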

  15. Web sites for postpartum depression: convenient, frustrating, incomplete, and misleading.

    PubMed

    Summers, Audra L; Logsdon, M Cynthia

    2005-01-01

    To evaluate the content and the technology of Web sites providing information on postpartum depression. Eleven search engines were queried using the words "Postpartum Depression." The top 10 sites in each search engine were evaluated for correct content and technology using the Web Depression Tool, based on the Technology Assessment Model. Of the 36 unique Web sites located, 34 were available to review. Only five Web sites provided >75% correct responses to questions that summarized the current state of the science for postpartum depression. Eleven of the Web sites contained little or no useful information about postpartum depression, despite being among the first 10 Web sites listed by the search engine. Some Web sites contained possibly harmful suggestions for treatment of postpartum depression. In addition, there are many problems with the technology of Web sites providing information on postpartum depression. A better Web site for postpartum depression is necessary if we are to meet the needs of consumers for accurate and current information using technology that enhances learning. Since patient education is a core competency for nurses, it is essential that nurses understand how their patients are using the World Wide Web for learning and how we can assist our patients to find appropriate sites containing correct information.

  16. Critical Reading of the Web

    ERIC Educational Resources Information Center

    Griffin, Teresa; Cohen, Deb

    2012-01-01

    The ubiquity and familiarity of the world wide web mean that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…

  17. E-Referencer: Transforming Boolean OPACs to Web Search Engines.

    ERIC Educational Resources Information Center

    Khoo, Christopher S. G.; Poo, Danny C. C.; Toh, Teck-Kang; Hong, Glenn

    E-Referencer is an expert intermediary system for searching library online public access catalogs (OPACs) on the World Wide Web. It is implemented as a proxy server that mediates the interaction between the user and Boolean OPACs. It transforms a Boolean OPAC into a retrieval system with many of the search capabilities of Web search engines.…

  18. Web Search Engines: Key To Locating Information for All Users or Only the Cognoscenti?

    ERIC Educational Resources Information Center

    Tomaiuolo, Nicholas G.; Packer, Joan G.

    This paper describes a study that attempted to ascertain the degree of success that undergraduates and graduate students, with varying levels of experience using the World Wide Web and Web search engines, and without librarian instruction or intervention, had in locating relevant material on specific topics furnished by the investigators. Because…

  19. Dcs Data Viewer, an Application that Accesses ATLAS DCS Historical Data

    NASA Astrophysics Data System (ADS)

    Tsarouchas, C.; Schlenker, S.; Dimitrov, G.; Jahn, G.

    2014-06-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms, and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design follows a client-server architecture. The Python-based server connects to the DB and fetches the data using optimized SQL requests. It communicates with the outside world by accepting HTTP requests and can also be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed with the Google Web Toolkit (GWT) framework. Its web interface is user friendly, and platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of output, such as tables, ASCII, or ROOT files, are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, allowing the exposure of the tool to hundreds of inexperienced users. The current configuration of the client and of the outputs can be saved in an XML file. Protection against web security attacks is foreseen and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of users worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems.
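
    The server side of such an architecture, accepting an HTTP request that names a detector element, running a SQL query, and returning the values over time, can be sketched with the Python standard library; the schema and element names are invented, not ATLAS's:

        # DDV-like data server sketch: HTTP request in, SQL query, JSON out.
        import json, sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        db = sqlite3.connect("dcs.db", check_same_thread=False)
        db.execute("CREATE TABLE IF NOT EXISTS dcs(element TEXT, ts TEXT, value REAL)")

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                element = parse_qs(urlparse(self.path).query).get("element", [""])[0]
                rows = db.execute(
                    "SELECT ts, value FROM dcs WHERE element=? ORDER BY ts",
                    (element,)).fetchall()   # parameterized query
                body = json.dumps(rows).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        # HTTPServer(("", 8080), Handler).serve_forever()
        # e.g. GET /?element=pixel_temperature returns [[ts, value], ...]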

  20. Bottled SAFT: A Web App Providing SAFT-γ Mie Force Field Parameters for Thousands of Molecular Fluids.

    PubMed

    Ervik, Åsmund; Mejía, Andrés; Müller, Erich A

    2016-09-26

    Coarse-grained molecular simulation has become a popular tool for modeling simple and complex fluids alike. The defining aspects of a coarse-grained model are the force field parameters, which must be determined for each particular fluid. Because the number of molecular fluids of interest in nature and in engineering processes is immense, constructing force field parameter tables by individually fitting to experimental data is a futile task. A step toward solving this challenge was taken recently by Mejía et al., who proposed a correlation that provides SAFT-γ Mie force field parameters for a fluid provided one knows the critical temperature, the acentric factor, and a liquid density, all relatively accessible properties. Building on this, we have applied the correlation to more than 6000 fluids and constructed a web application, called "Bottled SAFT", which makes this data set easily searchable by CAS number, name, or chemical formula. Alternatively, the application allows the user to calculate parameters for components not present in the database. Once the intermolecular potential has been found through Bottled SAFT, code snippets are provided for simulating the desired substance using the "raaSAFT" framework, which leverages established molecular dynamics codes to run the simulations. The code underlying the web application is written in Python using the Flask microframework; this allows us to provide a modern high-performance web app while also making use of the scientific libraries available in Python. Bottled SAFT aims at taking the complexity out of obtaining force field parameters for a wide range of molecular fluids, and facilitates setting up and running coarse-grained molecular simulations. The web application is freely available at http://www.bottledsaft.org. The underlying source code is available on Bitbucket under a permissive license.
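
    Flask is the stack the abstract names, so a lookup route of the kind Bottled SAFT exposes can be sketched as below. The parameter table, field names, and numeric values here are invented placeholders, not the app's real schema or data.

      # Minimal sketch of a Bottled-SAFT-style lookup route (hypothetical schema).
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      # Hypothetical force-field table: CAS number -> SAFT-gamma Mie parameters.
      PARAMS = {
          "7732-18-5": {"name": "water", "sigma_A": 3.0, "epsilon_K": 300.0,
                        "lambda_r": 8.0, "segments": 1},
      }

      @app.route("/search")
      def search():
          cas = request.args.get("cas", "")
          record = PARAMS.get(cas)
          if record is None:
              return jsonify(error="unknown CAS number"), 404
          return jsonify(record)

      if __name__ == "__main__":
          app.run()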

  1. Providing the Persistent Data Storage in a Software Engineering Environment Using Java/COBRA and a DBMS

    NASA Technical Reports Server (NTRS)

    Dhaliwal, Swarn S.

    1997-01-01

    An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used in building various functionality of the WHERE project, which intends to use the Web as its communication backbone. One limitation of the translated software (TcmJava), which militated against its use in the WHERE project, was its persistent data management mechanism, inherited from the original TCM, which was designed for standalone applications. Before TcmJava editors could be used as part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built that would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware layer (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Developer's Kit) compatible Web browser. The editor establishes a connection with a server using ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of one or more CORBA objects, depending upon whether the data is to be made persistent on a single server or on multiple servers. The CORBA object providing the persistent data server is implemented in the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.

  2. Teen smoking cessation help via the Internet: a survey of search engines.

    PubMed

    Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I

    2003-07-01

    The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate online smoking cessation help for teens is constrained by search engine choice and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly, smoking cessation Web sites need to be indexed by multiple search engines, and further evaluation of such sites needs to be conducted to identify the most effective help for teens.
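
    The overlap statistics the study reports (unique sites, share listed by only one engine) come from a simple tally across engines, which can be sketched as follows. The engine names and URLs are placeholder data, not the study's.

      # Sketch of the cross-engine overlap tally described above (placeholder data).
      from collections import Counter

      results = {  # engine -> top-20 hit URLs
          "engine_a": ["site1.org", "site2.org"],
          "engine_b": ["site2.org", "site3.org"],
      }

      # Count, per unique site, how many engines list it (set() dedups per engine).
      counts = Counter(url for hits in results.values() for url in set(hits))
      unique_sites = len(counts)
      single_engine = sum(1 for n in counts.values() if n == 1)
      print(unique_sites, single_engine / unique_sites)  # share listed by one engine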

  3. Programming for physicians: A free online course.

    PubMed

    Kubben, Pieter L

    2016-01-01

    This article is an introduction for clinical readers into programming and computational thinking using the programming language Python. Exercises can be done completely online without any need for installation of software. Participants will be taught the fundamentals of programming, which are necessarily independent of the sort of application (stand-alone, web, mobile, engineering, and statistical/machine learning) that is to be developed afterward.

  4. Social Dimension of Web 2.0 in Engineering Education

    ERIC Educational Resources Information Center

    Ahrens, Andreas; Zascerinska, Jelena

    2010-01-01

    Contemporary engineers need to become more cognizant and more responsive to the emerging needs of the market for engineering and technology services. Social dimension of Web 2.0 which penetrates our society more thoroughly with the availability of broadband services has the potential to contribute decisively to the sustainable development of…

  5. Social Dimension of Web 2.0 in Engineering Education: Students' View

    ERIC Educational Resources Information Center

    Zascerinska, Jelena; Bassus, Olaf; Ahrens, Andreas

    2010-01-01

    Contemporary engineers need to become more cognizant and more responsive to the emerging needs of the market for engineering and technology services. Social dimension of Web 2.0 which penetrates our society more thoroughly with the availability of broadband services has the potential to contribute decisively to the sustainable development of…

  6. Effects of Web-Based Interactive Modules on Engineering Students' Learning Motivations

    ERIC Educational Resources Information Center

    Bai, Haiyan; Aman, Amjad; Xu, Yunjun; Orlovskaya, Nina; Zhou, Mingming

    2016-01-01

    The purpose of this study is to assess the impact of a newly developed module system, Interactive Web-Based Visualization Tools for Gluing Undergraduate Fuel Cell Systems Courses (IGLU), on the learning motivations of engineering students, using two samples (n[subscript 1] = 144 and n[subscript 2] = 135) from senior engineering classes. The…

  7. Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.

    ERIC Educational Resources Information Center

    Pathak, Praveen; Gordon, Michael

    1999-01-01

    Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…

  8. Optoelectronic Devices, Sensors, Communication and Multimedia, Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    This paper is the fourth part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with Optoelectronic Devices, Sensors, Communication and Multimedia (Video and Audio) technologies. It presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the Jubilee XXXth SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, JET tokamak and pi-of-the sky experiments development. The symposium is an annual summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country with guests from this part of Europe. A digest of Wilga references is presented [1-270].

  9. Biomedical, Artificial Intelligence, and DNA Computing Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    This paper is the fifth part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with Biomedical, Artificial Intelligence and DNA Computing technologies. It presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the Jubilee XXXth SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, JET tokamak and pi-of-the sky experiments development. The symposium is an annual summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country with guests from this part of Europe. A digest of Wilga references is presented [1-270].

  10. A novel architecture for information retrieval system based on semantic web

    NASA Astrophysics Data System (ADS)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so that the web now faces a new challenge: information overload. The challenge before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand the meaning of the documents. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when such a retrieval system lacks sufficient knowledge, it returns a large number of meaningless results to users because of the huge volume of information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
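
    The routing decision described in the last sentence can be sketched with a toy dispatcher: if enough of the query's terms are covered by the ontology, the query goes to the semantic engine, otherwise to keyword search. This is illustrative only, not the paper's actual inference engine, and the ontology vocabulary is invented.

      # Illustrative sketch of routing queries between keyword and semantic search.
      ONTOLOGY_TERMS = {"protein", "gene", "enzyme"}  # hypothetical ontology vocabulary

      def route_query(query: str) -> str:
          terms = set(query.lower().split())
          known = terms & ONTOLOGY_TERMS
          # Require enough ontology coverage before trusting semantic retrieval.
          if len(known) >= max(1, len(terms) // 2):
              return "semantic"
          return "keyword"

      print(route_query("enzyme inhibitors"))  # -> semantic
      print(route_query("cheap flights"))      # -> keyword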

  11. Web Design for Space Operations: An Overview of the Challenges and New Technologies Used in Developing and Operating Web-Based Applications in Real-Time Operational Support Onboard the International Space Station, in Astronaut Mission Planning and Mission Control Operations

    NASA Technical Reports Server (NTRS)

    Khan, Ahmed

    2010-01-01

    The International Space Station (ISS) Operations Planning Team, Mission Control Centre and Mission Automation Support Network (MAS) have all evolved over the years to use commercial web-based technologies to create a configurable electronic infrastructure to manage the complex network of real-time planning, crew scheduling, resource and activity management as well as onboard document and procedure management required to co-ordinate ISS assembly, daily operations and mission support. While these Web technologies are classified as non-critical in nature, their use is part of an essential backbone of daily operations on the ISS and allows the crew to operate the ISS as a functioning science laboratory. The rapid evolution of the internet from 1998 (when ISS assembly began) to today, along with the nature of continuous manned operations in space, have presented a unique challenge in terms of software engineering and system development. In addition, the use of a wide array of competing internet technologies (including commercial technologies such as .NET and Java) and the special requirements of having to support this network, both nationally among various control centres for International Partners (IPs), as well as onboard the station itself, have created special challenges for the MCC Web Tools Development Team, software engineers and flight controllers, who implement and maintain this system. This paper presents an overview of some of these operational challenges, and the evolving nature of the solutions and the future use of COTS-based rich internet technologies in manned space flight operations. In particular, this paper focuses on the use of Microsoft's .NET API to develop Web-based operational tools, the use of XML-based service-oriented architectures (SOA) that needed to be customized to support mission operations, the maintenance of a Microsoft IIS web server onboard the ISS, the OpsLan, and functional-oriented Web design with AJAX.

  12. A Web Based Collaborative Design Environment for Spacecraft

    NASA Technical Reports Server (NTRS)

    Dunphy, Julia

    1998-01-01

    In this era of shrinking federal budgets in the USA, we need to dramatically improve the efficiency of the spacecraft engineering design process. We have developed a method which captures much of the experts' expertise in a dataflow design graph: a seamlessly connectable set of local and remote design tools; seamlessly connectable web-based design tools; and a web browser interface to the developing spacecraft design. We have recently completed our first web browser interface and demonstrated its utility in the design of an aeroshell using design tools located at web sites at three NASA facilities. Multiple design engineers and managers are now able to interrogate the design engine simultaneously and find out what the design looks like at any point in the design cycle, what its parameters are, and how it reacts to adverse space environments.

  13. Can Interactive Web-Based CAD Tools Improve the Learning of Engineering Drawing? A Case Study

    ERIC Educational Resources Information Center

    Pando Cerra, Pablo; Suárez González, Jesús M.; Busto Parra, Bernardo; Rodríguez Ortiz, Diana; Álvarez Peñín, Pedro I.

    2014-01-01

    Many current Web-based learning environments facilitate the theoretical teaching of a subject but this may not be sufficient for those disciplines that require a significant use of graphic mechanisms to resolve problems. This research study looks at the use of an environment that can help students learn engineering drawing with Web-based CAD…

  14. Information Retrieval for Education: Making Search Engines Language Aware

    ERIC Educational Resources Information Center

    Ott, Niels; Meurers, Detmar

    2010-01-01

    Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…

  15. Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.

    ERIC Educational Resources Information Center

    Mardis, Marcia

    Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden,""invisible," or…

  16. Adding a Visualization Feature to Web Search Engines: It’s Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why we are still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs). Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  17. An open-source, mobile-friendly search engine for public medical knowledge.

    PubMed

    Samwald, Matthias; Hanbury, Allan

    2014-01-01

    The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.

  18. Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.

    PubMed

    Muin, Michael; Fontelo, Paul

    2006-11-03

    The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client side, which allows instant feedback without reloading or refreshing the page, resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
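
    The backend pattern the abstract describes — querying NCBI E-Utilities and parsing the XML reply — can be sketched in Python rather than the paper's PHP. The E-Utilities esearch endpoint and its db/term/retmax parameters are the real public API; error handling and rate limiting are omitted for brevity.

      # Sketch: fetch a PubMed result count via E-Utilities, as used for previews.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def pubmed_count(term: str) -> int:
          url = BASE + "?" + urllib.parse.urlencode(
              {"db": "pubmed", "term": term, "retmax": 0})
          with urllib.request.urlopen(url) as resp:
              tree = ET.parse(resp)  # eSearchResult XML
          return int(tree.findtext("Count"))

      print(pubmed_count("asthma AND children"))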

  19. FDRAnalysis: a tool for the integrated analysis of tandem mass spectrometry identification results from multiple search engines.

    PubMed

    Wedge, David C; Krishna, Ritesh; Blackhurst, Paul; Siepen, Jennifer A; Jones, Andrew R; Hubbard, Simon J

    2011-04-01

    Confident identification of peptides via tandem mass spectrometry underpins modern high-throughput proteomics. This has motivated considerable recent interest in the postprocessing of search engine results to increase confidence and calculate robust statistical measures, for example through the use of decoy databases to calculate false discovery rates (FDR). FDR-based analyses allow for multiple testing and can assign a single confidence value for both sets and individual peptide spectrum matches (PSMs). We recently developed an algorithm for combining the results from multiple search engines, integrating FDRs for sets of PSMs made by different search engine combinations. Here we describe a web-server and a downloadable application that makes this routinely available to the proteomics community. The web server offers a range of outputs including informative graphics to assess the confidence of the PSMs and any potential biases. The underlying pipeline also provides a basic protein inference step, integrating PSMs into protein ambiguity groups where peptides can be matched to more than one protein. Importantly, we have also implemented full support for the mzIdentML data standard, recently released by the Proteomics Standards Initiative, providing users with the ability to convert native formats to mzIdentML files, which are available to download.
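
    The decoy-based FDR estimate the abstract refers to is conventionally computed as the ratio of decoy to target matches above a score threshold; a minimal sketch follows, with placeholder PSM data.

      # Sketch of a target-decoy FDR estimate at a score threshold.
      def fdr_at_threshold(psms, threshold):
          """psms: iterable of (score, is_decoy) pairs."""
          targets = sum(1 for s, d in psms if s >= threshold and not d)
          decoys = sum(1 for s, d in psms if s >= threshold and d)
          return decoys / targets if targets else 0.0

      psms = [(42.0, False), (40.5, True), (39.0, False), (10.0, True)]
      print(fdr_at_threshold(psms, 30.0))  # 1 decoy / 2 targets = 0.5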

  20. FDRAnalysis: A tool for the integrated analysis of tandem mass spectrometry identification results from multiple search engines

    PubMed Central

    Wedge, David C; Krishna, Ritesh; Blackhurst, Paul; Siepen, Jennifer A; Jones, Andrew R.; Hubbard, Simon J.

    2013-01-01

    Confident identification of peptides via tandem mass spectrometry underpins modern high-throughput proteomics. This has motivated considerable recent interest in the post-processing of search engine results to increase confidence and calculate robust statistical measures, for example through the use of decoy databases to calculate false discovery rates (FDR). FDR-based analyses allow for multiple testing and can assign a single confidence value for both sets and individual peptide spectrum matches (PSMs). We recently developed an algorithm for combining the results from multiple search engines, integrating FDRs for sets of PSMs made by different search engine combinations. Here we describe a web-server, and a downloadable application, which makes this routinely available to the proteomics community. The web server offers a range of outputs including informative graphics to assess the confidence of the PSMs and any potential biases. The underlying pipeline provides a basic protein inference step, integrating PSMs into protein ambiguity groups where peptides can be matched to more than one protein. Importantly, we have also implemented full support for the mzIdentML data standard, recently released by the Proteomics Standards Initiative, providing users with the ability to convert native formats to mzIdentML files, which are available to download. PMID:21222473

  1. MetaSEEk: a content-based metasearch engine for images

    NASA Astrophysics Data System (ADS)

    Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu

    1997-12-01

    Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated into the ranking refinement. We compare MetaSEEk with a baseline version of a meta-search engine, which does not use the past performance of the different search engines when recommending target search engines for future queries.
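
    The selection idea — keep a per-query-class performance score for each engine and refine it from user feedback — can be sketched as below. This is only an illustration of the concept, not MetaSEEk's actual ranking code; names and the update rule are invented.

      # Illustrative sketch of feedback-driven engine selection.
      from collections import defaultdict

      scores = defaultdict(float)  # (query_class, engine) -> running score

      def update(query_class, engine, relevant, alpha=0.3):
          # Exponential moving average of feedback: +1 relevant, -1 not.
          key = (query_class, engine)
          scores[key] = (1 - alpha) * scores[key] + alpha * (1.0 if relevant else -1.0)

      def recommend(query_class, engines, k=2):
          return sorted(engines, key=lambda e: scores[(query_class, e)], reverse=True)[:k]

      update("color-layout", "engineA", relevant=True)
      update("color-layout", "engineB", relevant=False)
      print(recommend("color-layout", ["engineA", "engineB", "engineC"]))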

  2. Automating Information Discovery Within the Invisible Web

    NASA Astrophysics Data System (ADS)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
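
    The crawl-and-index loop described at the start of this abstract can be sketched in a few lines. Real crawlers add politeness delays, robots.txt checks, and far more robust parsing than this regex; the sketch only shows the fetch / extract-links / index cycle.

      # Minimal crawl-and-index loop (illustrative only).
      import re
      import urllib.request
      from collections import deque

      def crawl(seed, max_pages=10):
          index, queue, seen = {}, deque([seed]), {seed}
          while queue and len(index) < max_pages:
              url = queue.popleft()
              try:
                  html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
              except Exception:
                  continue  # skip unreachable pages
              index[url] = re.findall(r"\w+", html.lower())  # crude keyword list
              for link in re.findall(r'href="(https?://[^"]+)"', html):
                  if link not in seen:
                      seen.add(link)
                      queue.append(link)
          return index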

  3. The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval

    ERIC Educational Resources Information Center

    Schymik, Gregory

    2012-01-01

    Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the…

  4. Practical Tips and Strategies for Finding Information on the Internet.

    ERIC Educational Resources Information Center

    Armstrong, Rhonda; Flanagan, Lynn

    This paper presents the most important concepts and techniques to use in successfully searching the major World Wide Web search engines and directories, explains the basics of how search engines work, and describes what is included in their indexes. Following an introduction that gives an overview of Web directories and search engines, the first…

  5. www.teld.net: Online Courseware Engine for Teaching by Examples and Learning by Doing.

    ERIC Educational Resources Information Center

    Huang, G. Q.; Shen, B.; Mak, K. L.

    2001-01-01

    Describes TELD (Teaching by Examples and Learning by Doing), a Web-based online courseware engine for higher education. Topics include problem-based learning; project-based learning; case methods; TELD as a Web server; course materials; TELD as a search engine; and TELD as an online virtual classroom for electronic delivery of electronic…

  6. Curating the Web: Building a Google Custom Search Engine for the Arts

    ERIC Educational Resources Information Center

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  7. Federated Search and the Library Web Site: A Study of Association of Research Libraries Member Web Sites

    ERIC Educational Resources Information Center

    Williams, Sarah C.

    2010-01-01

    The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…

  8. The Effects of Web-Based and Face-to-Face Discussion on Computer Engineering Majors' Performance on the Karnaugh Map

    ERIC Educational Resources Information Center

    Hung, Yen-Chu

    2011-01-01

    This study investigates the different effects of web-based and face-to-face discussion on computer engineering majors' performance using the Karnaugh map in digital logic design. Pretest and posttest scores for two treatment groups (web-based discussion and face-to-face discussion) and a control group were compared and subjected to covariance…

  9. Drexel at TREC 2014 Federated Web Search Track

    DTIC Science & Technology

    2014-11-01

    of its input RS results. 1. INTRODUCTION Federated Web Search is the task of searching multiple search engines simultaneously and combining their...or distributed properly [5]. The goal of RS is then, for a given query, to select only the most promising search engines from all those available. Most...result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are

  10. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    PubMed

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among them: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
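
    The measurement itself amounts to sending term queries to an engine and counting how often each catalog's domain appears among the results. In the sketch below, search(engine, term) is a placeholder for whatever engine API or scraper a study would use, and the catalog domains are illustrative.

      # Sketch of a catalog-visibility tally (search backend is a placeholder).
      from collections import Counter
      from urllib.parse import urlparse

      CATALOGS = {"cliniweb": "cliniweb.ohsu.edu", "cismef": "cismef.chu-rouen.fr"}

      def visibility(search, engine, mesh_terms):
          hits = Counter()
          for term in mesh_terms:
              for url in search(engine, term):
                  host = urlparse(url).netloc
                  for name, domain in CATALOGS.items():
                      if host.endswith(domain):
                          hits[name] += 1
          return hits

      def fake_search(engine, term):  # stand-in returning canned result URLs
          return ["http://cliniweb.ohsu.edu/term", "http://example.com/page"]

      print(visibility(fake_search, "any-engine", ["asthma"]))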

  11. E-TALEN: a web tool to design TALENs for genome engineering.

    PubMed

    Heigwer, Florian; Kerr, Grainne; Walther, Nike; Glaeser, Kathrin; Pelz, Oliver; Breinig, Marco; Boutros, Michael

    2013-11-01

    Use of transcription activator-like effector nucleases (TALENs) is a promising new technique in the field of targeted genome engineering, editing and reverse genetics. Its applications span from introducing knockout mutations to endogenous tagging of proteins and targeted excision repair. Owing to this wide range of possible applications, there is a need for fast and user-friendly TALEN design tools. We developed E-TALEN (http://www.e-talen.org), a web-based tool to design TALENs for experiments of varying scale. E-TALEN enables the design of TALENs against a single target or a large number of target genes. We significantly extended previously published design concepts to consider genomic context and different applications. E-TALEN guides the user through an end-to-end design process of de novo TALEN pairs, which are specific to a certain sequence or genomic locus. Furthermore, E-TALEN offers a functionality to predict targeting and specificity for existing TALENs. Owing to the computational complexity of many of the steps in the design of TALENs, particular emphasis has been put on the implementation of fast yet accurate algorithms. We implemented a user-friendly interface, from the input parameters to the presentation of results. An additional feature of E-TALEN is the in-built sequence and annotation database available for many organisms, including human, mouse, zebrafish, Drosophila and Arabidopsis, which can be extended in the future.

  12. The Geogenomic Mutational Atlas of Pathogens (GoMAP) Web System

    PubMed Central

    Sargeant, David P.; Hedden, Michael W.; Deverasetty, Sandeep; Strong, Christy L.; Alaniz, Izua J.; Bartlett, Alexandria N.; Brandon, Nicholas R.; Brooks, Steven B.; Brown, Frederick A.; Bufi, Flaviona; Chakarova, Monika; David, Roxanne P.; Dobritch, Karlyn M.; Guerra, Horacio P.; Levit, Kelvy S.; Mathew, Kiran R.; Matti, Ray; Maza, Dorothea Q.; Mistry, Sabyasachy; Novakovic, Nemanja; Pomerantz, Austin; Rafalski, Timothy F.; Rathnayake, Viraj; Rezapour, Noura; Ross, Christian A.; Schooler, Steve G.; Songao, Sarah; Tuggle, Sean L.; Wing, Helen J.; Yousif, Sandy; Schiller, Martin R.

    2014-01-01

    We present a new approach for pathogen surveillance we call Geogenomics. Geogenomics examines the geographic distribution of the genomes of pathogens, with a particular emphasis on those mutations that give rise to drug resistance. We engineered a new web system called Geogenomic Mutational Atlas of Pathogens (GoMAP) that enables investigation of the global distribution of individual drug resistance mutations. As a test case we examined mutations associated with HIV resistance to FDA-approved antiretroviral drugs. GoMAP-HIV makes use of existing public drug resistance and HIV protein sequence data to examine the distribution of 872 drug resistance mutations in ∼502,000 sequences for many countries in the world. We also implemented a broadened classification scheme for HIV drug resistance mutations. Several patterns for geographic distributions of resistance mutations were identified by visual mining using this web tool. GoMAP-HIV is an open access web application available at http://www.bio-toolkit.com/GoMap/project/ PMID:24675726

  13. The Geogenomic Mutational Atlas of Pathogens (GoMAP) web system.

    PubMed

    Sargeant, David P; Hedden, Michael W; Deverasetty, Sandeep; Strong, Christy L; Alaniz, Izua J; Bartlett, Alexandria N; Brandon, Nicholas R; Brooks, Steven B; Brown, Frederick A; Bufi, Flaviona; Chakarova, Monika; David, Roxanne P; Dobritch, Karlyn M; Guerra, Horacio P; Levit, Kelvy S; Mathew, Kiran R; Matti, Ray; Maza, Dorothea Q; Mistry, Sabyasachy; Novakovic, Nemanja; Pomerantz, Austin; Rafalski, Timothy F; Rathnayake, Viraj; Rezapour, Noura; Ross, Christian A; Schooler, Steve G; Songao, Sarah; Tuggle, Sean L; Wing, Helen J; Yousif, Sandy; Schiller, Martin R

    2014-01-01

    We present a new approach for pathogen surveillance we call Geogenomics. Geogenomics examines the geographic distribution of the genomes of pathogens, with a particular emphasis on those mutations that give rise to drug resistance. We engineered a new web system called Geogenomic Mutational Atlas of Pathogens (GoMAP) that enables investigation of the global distribution of individual drug resistance mutations. As a test case we examined mutations associated with HIV resistance to FDA-approved antiretroviral drugs. GoMAP-HIV makes use of existing public drug resistance and HIV protein sequence data to examine the distribution of 872 drug resistance mutations in ∼ 502,000 sequences for many countries in the world. We also implemented a broadened classification scheme for HIV drug resistance mutations. Several patterns for geographic distributions of resistance mutations were identified by visual mining using this web tool. GoMAP-HIV is an open access web application available at http://www.bio-toolkit.com/GoMap/project/

  14. Datamonkey 2.0: a modern web application for characterizing selective and other evolutionary processes.

    PubMed

    Weaver, Steven; Shank, Stephen D; Spielman, Stephanie J; Li, Michael; Muse, Spencer V; Kosakovsky Pond, Sergei L

    2018-01-02

    Inference of how evolutionary forces have shaped extant genetic diversity is a cornerstone of modern comparative sequence analysis. Advances in sequence generation and increased statistical sophistication of relevant methods now allow researchers to extract ever more evolutionary signal from the data, albeit at an increased computational cost. Here, we announce the release of Datamonkey 2.0, a completely re-engineered version of the Datamonkey web-server for analyzing evolutionary signatures in sequence data. For this endeavor, we leveraged recent developments in open-source libraries that facilitate interactive, robust, and scalable web application development. Datamonkey 2.0 provides a carefully curated collection of methods for interrogating coding-sequence alignments for imprints of natural selection, packaged as a responsive (i.e. can be viewed on tablet and mobile devices), fully interactive, and API-enabled web application. To complement Datamonkey 2.0, we additionally release HyPhy Vision, an accompanying JavaScript application for visualizing analysis results. HyPhy Vision can also be used separately from Datamonkey 2.0 to visualize locally-executed HyPhy analyses. Together, Datamonkey 2.0 and HyPhy Vision showcase how scientific software development can benefit from general-purpose open-source frameworks. Datamonkey 2.0 is freely and publicly available at http://www.datamonkey.org, and the underlying codebase is available from https://github.com/veg/datamonkey-js.

  15. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-12-01

    simultaneous cloud nodes. 1. INTRODUCTION The proliferation and popularity of infrastructure-as-a- service (IaaS) cloud computing services such as...Amazon Web Services and Google Compute Engine means more cloud tenants are hosting sensitive, private, and business critical data and applications in the...thousands of IaaS resources as they are elastically instantiated and terminated. Prior cloud trusted computing solutions address a subset of these features

  16. Programming for physicians: A free online course

    PubMed Central

    Kubben, Pieter L.

    2016-01-01

    This article is an introduction for clinical readers into programming and computational thinking using the programming language Python. Exercises can be done completely online without any need for installation of software. Participants will be taught the fundamentals of programming, which are necessarily independent of the sort of application (stand-alone, web, mobile, engineering, and statistical/machine learning) that is to be developed afterward. PMID:27127694

  17. CrossTalk. The Journal of Defense Software Engineering. Volume 26, Number 5

    DTIC Science & Technology

    2013-10-01

    to a backend domain managed by the cyber criminal. Mobile bots can perform piggybacking on legitimate applications and steal data by controlling...technology infrastructure for managing identities, interfaces (web and/or mobile), and agreements with service providers. The necessary capabilities and...platforms of unknown or dubious origin, global access by mobile (and largely insecure) devices, eroded trust boundaries, and the possibility of malevolent

  18. Undergraduate Research Opportunities in OSS

    NASA Astrophysics Data System (ADS)

    Boldyreff, Cornelia; Capiluppi, Andrea; Knowles, Thomas; Munro, James

    Using Open Source Software (OSS) in undergraduate teaching in universities is now commonplace. Students use OSS applications and systems in their courses on programming, operating systems, DBMS, and web development, to name but a few. Studying OSS projects from both a product and a process view also forms part of the software engineering curriculum at various universities. Many students, as well as developers, have taken part in OSS projects.

  19. Globus | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.

  20. Methodologies for Crawler Based Web Surveys.

    ERIC Educational Resources Information Center

    Thelwall, Mike

    2002-01-01

    Describes Web survey methodologies used to study the content of the Web, and discusses search engines and the concept of crawling the Web. Highlights include Web page selection methodologies; obstacles to reliable automatic indexing of Web sites; publicly indexable pages; crawling parameters; and tests for file duplication. (Contains 62…

  1. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of physical processes, i.e., time series data. This data lends itself to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U.S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
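
    The pre-processing steps named above (interpolation, aggregation) are standard time series operations; a minimal sketch, assuming pandas and with an invented column name and data, would look like this.

      # Sketch: fill gaps by interpolation, then aggregate to monthly means.
      import pandas as pd

      ts = pd.Series(
          [2.1, None, 2.4, 2.8],  # hypothetical daily stage readings, one missing
          index=pd.date_range("2017-01-01", periods=4, freq="D"),
          name="stage_ft",
      )
      ts = ts.interpolate()               # fill the missing observation linearly
      monthly = ts.resample("MS").mean()  # aggregate to monthly means
      print(monthly)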

  2. POOL server: machine learning application for functional site prediction in proteins.

    PubMed

    Somarowthu, Srinivas; Ondrechen, Mary Jo

    2012-08-01

    We present an automated web server for partial order optimum likelihood (POOL), a machine learning application that combines computed electrostatic and geometric information for high-performance prediction of catalytic residues from 3D structures. Input features consist of THEMATICS electrostatics data and pocket information from ConCavity. THEMATICS measures deviation from typical, sigmoidal titration behavior to identify functionally important residues, and ConCavity identifies binding pockets by analyzing the surface geometry of protein structures. Neither THEMATICS nor ConCavity (structure only) requires the query protein to have any sequence or structure similarity to other proteins; hence, POOL is applicable to proteins with novel folds and engineered proteins. As an additional option for cases where sequence homologues are available, users can include evolutionary information from INTREPID for enhanced accuracy in site prediction. The web site is free and open to all users, with no login requirement, at http://www.pool.neu.edu. Contact: m.ondrechen@neu.edu. Supplementary data are available at Bioinformatics online.
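
    To illustrate the shape of the task only: POOL's actual estimator is a monotonicity-constrained ("partial order") maximum-likelihood method, not the off-the-shelf logistic regression sketched here, and the per-residue feature values below are invented. The sketch simply shows combining electrostatic and pocket features into a catalytic-residue score.

      # Stand-in classifier combining two per-residue features (not POOL itself).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical features per residue: [THEMATICS deviation, pocket score]
      X = np.array([[0.9, 0.8], [0.1, 0.2], [0.7, 0.9], [0.2, 0.1]])
      y = np.array([1, 0, 1, 0])  # 1 = annotated catalytic residue

      model = LogisticRegression().fit(X, y)
      print(model.predict_proba([[0.8, 0.7]])[:, 1])  # catalytic probability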

  3. Function Point Analysis Depot

    NASA Technical Reports Server (NTRS)

    Muniz, R.; Martinez, El; Szafran, J.; Dalton, A.

    2011-01-01

    The Function Point Analysis (FPA) Depot is a web application originally designed by one of the NE-C3 branch's engineers, Jamie Szafran, and created specifically for the Software Development team of the Launch Control Systems (LCS) project. The application evaluates the work of each developer in order to obtain a realistic estimate of the hours to be assigned to a specific development task. The Architect Team had made design change requests for the Depot to change the schema of the application's information; that information, changed in the database, needed to be changed in the graphical user interface (GUI), written in Ruby on Rails (RoR), and in the web service/server side, written in Java, to match the database changes. These changes were made by two interns from NE-C: Ricardo Muniz from NE-C3, who made all the schema changes for the GUI in RoR, and Edwin Martinez, from NE-C2, who made all the changes on the Java side.

  4. Detection and Monitoring of Improvised Explosive Device Education Networks through the World Wide Web

    DTIC Science & Technology

    2009-06-01

    Search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages, regardless of domain. In addition, this thesis discusses potential evaluation and monitoring techniques to be used in conjunction

  5. EngineSim: Turbojet Engine Simulator Adapted for High School Classroom Use

    NASA Technical Reports Server (NTRS)

    Petersen, Ruth A.

    2001-01-01

    EngineSim is an interactive educational computer program that allows users to explore the effect of engine operation on total aircraft performance. The software is supported by a basic propulsion web site called the Beginner's Guide to Propulsion, which includes educator-created, web-based activities for the classroom use of EngineSim. In addition, educators can schedule videoconferencing workshops in which EngineSim's creator demonstrates the software and discusses its use in the educational setting. This software is a product of NASA Glenn Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program.

  6. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  7. Semantically-enabled sensor plug & play for the sensor web.

    PubMed

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.
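
    A toy version of the matchmaking step conveys the idea: pair a newly announced sensor with services whose required observed property it can supply. The real system uses ontologies and reasoning engines rather than the string match below, and all names here are invented.

      # Toy matchmaking sketch for sensor plug & play (illustrative only).
      SENSORS = {"fluorometer-12": {"observes": "oil-concentration", "unit": "ppm"}}
      NEEDS = [{"service": "spill-monitor", "property": "oil-concentration"}]

      def matchmake(sensor_id):
          caps = SENSORS[sensor_id]
          return [n["service"] for n in NEEDS if n["property"] == caps["observes"]]

      print(matchmake("fluorometer-12"))  # -> ['spill-monitor']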

  8. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    PubMed Central

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  9. Creating a Prototype Web Application for Spacecraft Real-Time Data Visualization on Mobile Devices

    NASA Technical Reports Server (NTRS)

    Lang, Jeremy S.; Irving, James R.

    2014-01-01

    Mobile devices (smart phones, tablets) have become commonplace among almost all sectors of the workforce, especially in the technical and scientific communities. These devices provide individuals the ability to be constantly connected to any area of interest they may have, whenever and wherever they are located. The Huntsville Operations Support Center (HOSC) is attempting to take advantage of this constant connectivity to extend the data visualization component of the Payload Operations and Integration Center (POIC) to a person's mobile device. POIC users currently have a rather unique capability to create custom user interfaces in order to view International Space Station (ISS) payload health and status telemetry. These displays are used at various console positions within the POIC. The Software Engineering team has created a Mobile Display capability that will allow authenticated users to view the same displays created for the console positions on the mobile device of their choice. Utilizing modern technologies including ASP.NET, JavaScript, and HTML5, we have created a web application that renders the user's displays in any modern desktop or mobile web browser, regardless of the operating system on the device. Additionally, the application is device aware, which enables it to render its configuration and selection menus with themes that correspond to the particular device. The Mobile Display application uses a communication mechanism known as SignalR to push updates to the web client. This communication mechanism automatically detects the best communication protocol between the client and server and also manages disconnections and reconnections of the client to the server. One benefit of this application is that the user can monitor important telemetry even while away from their console position. If expanded to the scientific community, this application would allow a scientist to view a snapshot of the state of their particular experiment at any time or place. Because the web application renders the displays that can currently be created with the POIC ground system, the user can tailor their displays for a particular device using tools that they are already trained to use.

  10. [Development of domain specific search engines].

    PubMed

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace expands at a pace that nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines. A lot of commercial search engines have already been put on the market, but they respond with results so cumbersome that domain experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response for chemical hazards. These engines have been on our Web site for testing.

  11. Getting to the top of Google: search engine optimization.

    PubMed

    Maley, Catherine; Baum, Neil

    2010-01-01

    Search engine optimization is the process of making your Web site appear at or near the top of popular search engines such as Google, Yahoo, and MSN. This is not done by luck or by knowing someone who works for the search engines, but by understanding how search engines select Web sites for placement at the top or on the first page of results. This article reviews the process and provides methods and techniques you can use to have your site ranked at or very near the top.

  12. Management system for the SND experiments

    NASA Astrophysics Data System (ADS)

    Pugachev, K.; Korol, A.

    2017-09-01

    A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases, which contain experiment configuration, conditions, and metadata. The new system is designed in a client-server architecture and has several logical layers corresponding to user roles. A new template engine was created, and a web application was implemented using the Node.js framework. At present, the application provides viewing and editing of the configuration, and viewing of experiment metadata, the experiment conditions data index, and the SND log (prototype).

  13. A new service-oriented grid-based method for AIoT application and implementation

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    The traditional three-layer Internet of Things (IoT) model, comprising a physical perception layer, an information transfer layer, and a service application layer, cannot fully express the complexity and diversity of the agricultural engineering area; it is hard to categorize, organize, and manage agricultural things with these three layers alone. Based on these requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, limitation, transparency, and leveling attributes of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA), and information and data about agricultural things are described and encapsulated using XML. Each agricultural engineering application provides its service by enabling one application node in this service-oriented grid. The paper also discusses a Web Services Resource Framework (WSRF)-based description of the Agricultural Internet of Things (AIoT) and the encapsulation method used for resource management in this model.

  14. Integrating Ecosystem Engineering and Food Web Ecology: Testing the Effect of Biogenic Reefs on the Food Web of a Soft-Bottom Intertidal Area

    PubMed Central

    De Smet, Bart; Fournier, Jérôme; De Troch, Marleen; Vincx, Magda; Vanaverbeke, Jan

    2015-01-01

    The potential of ecosystem engineers to modify the structure and dynamics of food webs has recently been hypothesised from a conceptual point of view. Empirical data on the integration of ecosystem engineers and food webs are, however, largely lacking. This paper investigates the hypothesised link based on a field sampling approach of intertidal biogenic aggregations created by the ecosystem engineer Lanice conchilega (Polychaeta, Terebellidae). The aggregations are known to have a considerable impact on the physical and biogeochemical characteristics of their environment and subsequently on the abundance and biomass of primary food sources and the macrofaunal (i.e. the macro-, hyper- and epibenthos) community. Therefore, we hypothesise that L. conchilega aggregations affect the structure, stability and isotopic niche of the consumer assemblage of a soft-bottom intertidal food web. Primary food sources and the bentho-pelagic consumer assemblage of a L. conchilega aggregation and a control area were sampled in two soft-bottom intertidal areas along the French coast and analysed for their stable isotopes. Despite the structural impacts of the ecosystem engineer on the associated macrofaunal community, the presence of L. conchilega aggregations has only a minor effect on the food web structure of soft-bottom intertidal areas. The isotopic niche widths of the consumer communities of the L. conchilega aggregations and control areas are highly similar, implying that consumer taxa do not shift their diet when feeding in a L. conchilega aggregation. Moreover, species packing and hence trophic redundancy were not affected, pointing to an unaltered stability of the food web in the presence of L. conchilega. PMID:26496349

  15. Developing Creativity and Problem-Solving Skills of Engineering Students: A Comparison of Web- and Pen-and-Paper-Based Approaches

    ERIC Educational Resources Information Center

    Valentine, Andrew; Belski, Iouri; Hamilton, Margaret

    2017-01-01

    Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed…

  16. Index Compression and Efficient Query Processing in Large Web Search Engines

    ERIC Educational Resources Information Center

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
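
    Although the record is truncated, a toy version of the data structure it names may help: an inverted index maps each term to a postings list of document ids, and query processing intersects those lists (real engines keep the lists compressed, which is the subject of the dissertation). The sketch below is generic, not from the cited work.

    ```javascript
    // Toy inverted index: each term maps to a postings list of doc ids.
    function buildIndex(docs) {
      const index = new Map();
      docs.forEach((text, docId) => {
        for (const term of new Set(text.toLowerCase().split(/\W+/).filter(Boolean))) {
          if (!index.has(term)) index.set(term, []);
          index.get(term).push(docId); // lists stay sorted by construction
        }
      });
      return index;
    }

    // AND query: intersect the postings lists of all query terms.
    function search(index, query) {
      const lists = query.toLowerCase().split(/\W+/).map(t => index.get(t) || []);
      return lists.reduce((a, b) => a.filter(id => b.includes(id)));
    }

    const idx = buildIndex(["web search engines", "inverted index structures", "web index"]);
    console.log(search(idx, "web index")); // -> [2]
    ```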

  17. An Analysis of Web Image Queries for Search.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh

    2003-01-01

    Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)

  18. Automatic generation of Web mining environments

    NASA Astrophysics Data System (ADS)

    Cibelli, Maurizio; Costagliola, Gennaro

    1999-02-01

    The main problem in retrieving information from the World Wide Web is the enormous number of unstructured documents and resources, which makes it difficult to locate and track appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general-purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes HTML layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.

  19. ICTNET at Web Track 2009 Diversity task

    DTIC Science & Technology

    2009-11-01

    performance. On the World Wide Web, there exist many documents which represent several implicit subtopics. We used commercial search engines to gather those...documents. In this task, our work can be divided into five steps. First, we collect documents returned by commercial search engines, and considered

  20. Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches

    PubMed Central

    Muin, Michael; Fontelo, Paul

    2006-01-01

    Background: The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented; these changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results: PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur on the client side, allowing instant feedback without reloading or refreshing the page and resulting in a more efficient user experience. Conclusion: PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
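
    For illustration, the kind of E-Utilities call that powers such an interface can be reproduced in a few lines of browser JavaScript. The esearch endpoint and its db/term/retmax parameters are NCBI's public E-Utilities interface; note that PubMed Interact itself parsed XML via PHP, whereas this sketch requests JSON for brevity.

    ```javascript
    // Query NCBI E-Utilities (esearch) for PubMed ids matching a term.
    async function pubmedSearch(term, retmax = 20) {
      const url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi" +
        `?db=pubmed&retmode=json&retmax=${retmax}&term=${encodeURIComponent(term)}`;
      const response = await fetch(url);
      const data = await response.json();
      return data.esearchresult.idlist; // array of PMIDs as strings
    }

    pubmedSearch("low back pain").then(ids => console.log(ids));
    ```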

  1. User Interface Composition with COTS-UI and Trading Approaches: Application for Web-Based Environmental Information Systems

    NASA Astrophysics Data System (ADS)

    Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés

    Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real time. In recent times, special interest has been paid to the globalization of information through a common vocabulary (i.e., ontologies) and to standardized ways of retrieving information on the Web (i.e., powerful search engines and intelligent software agents). These same principles of globalization and standardization should also hold for the user interfaces of WIS, yet these are still built on traditional development paradigms. In this paper we present an approach that reduces this globalization/standardization gap in the generation of WIS user interfaces by using a real-time "bottom-up" composition perspective with COTS-interface components (interface widgets) and trading services.

  2. Meeting Reference Responsibilities through Library Web Sites.

    ERIC Educational Resources Information Center

    Adams, Michael

    2001-01-01

    Discusses library Web sites and explains some of the benefits when libraries make their sites into reference portals, linking them to other useful Web sites. Topics include print versus Web information sources; limitations of search engines; what Web sites to include, including criteria for inclusions; and organizing the sites. (LRW)

  3. Extracting Macroscopic Information from Web Links.

    ERIC Educational Resources Information Center

    Thelwall, Mike

    2001-01-01

    Discussion of Web-based link analysis focuses on an evaluation of Ingversen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…

  4. The Design and Development of a Web-Interface for the Software Engineering Automation System

    DTIC Science & Technology

    2001-09-01

    application on the Internet. Subject terms: computer-aided prototyping, real-time systems, Java. ...difficult. Developing the entire system only to find it does not meet the customer's needs is a tremendous waste of time. Real-time systems need a...software prototyping is an iterative software development methodology utilized to improve the analysis and design of real-time systems [2]. One

  5. Commercial Building Energy Saver, API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    2015-08-27

    The CBES API provides an Application Programming Interface to a suite of functions for improving the energy efficiency of buildings, including building energy benchmarking, preliminary retrofit analysis using the pre-simulation database DEEP, and detailed retrofit analysis using energy modeling with the EnergyPlus simulation engine. The CBES API powers the LBNL CBES Web App and can be adopted by third-party developers and vendors into their software tools and platforms.

  6. LIBRA-WA: a web application for ligand binding site detection and protein function recognition.

    PubMed

    Toti, Daniele; Viet Hung, Le; Tortosa, Valentina; Brandi, Valentina; Polticelli, Fabio

    2018-03-01

    Recently, LIBRA, a tool for active/ligand binding site prediction, was described. LIBRA's effectiveness was comparable to that of similar state-of-the-art tools; however, its scoring scheme, output presentation, dependence on local resources, and overall convenience were amenable to improvement. To address these issues, LIBRA-WA, a web application based on an improved LIBRA engine, has been developed, featuring a novel scoring scheme that consistently improves LIBRA's performance and a refined algorithm that can identify binding sites hosted at the interface between different subunits. LIBRA-WA also offers additional functionalities such as ligand clustering and a completely redesigned interface for easier analysis of the output. Extensive tests on 373 apoprotein structures indicate that LIBRA-WA is able to identify the biologically relevant ligand/ligand binding site in 357 cases (∼96%), with the correct prediction ranking first in 349 cases (∼98% of the latter, ∼94% of the total). The earlier stand-alone tool has also been updated and dubbed LIBRA+ by integrating LIBRA-WA's improved engine for cross-compatibility purposes. LIBRA-WA and LIBRA+ are available at http://www.computationalbiology.it/software.html.

  7. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    PubMed Central

    Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965

  8. Faculty Collaboration on Multidisciplinary Web-Based Education.

    ERIC Educational Resources Information Center

    Saad, Ashraf; Uskov, Vladimir L.; Cedercreutz, Kettil; Geonetta, Sam; Spille, Jack; Abel, Dick

    In 1998, faculty members at the University of Cincinnati started a project as an interdepartmental collaboration to investigate the use of World Wide Web-based instructional (WBI) tools. The project team included representatives from various areas such as information engineering technology, mechanical engineering technology, chemical technology,…

  9. Determining the Scope of Collection Development and Research Assistance for Cross-Disciplinary Areas: A Case Study of Two Contrasting Areas, Nanotechnology and Transportation Engineering

    ERIC Educational Resources Information Center

    Williamson, Jeanine M.; Han, Lee D.; Colon-Aguirre, Monica

    2009-01-01

    The study examined the extent of cross-disciplinarity in nanotechnology and transportation engineering research. Researchers in these two fields were determined from the web sites of the U.S. News and World Report top 100 schools in civil engineering and materials science. Web of Science searches for 2006 and 2007 articles were obtained and the…

  10. Representing Human Expertise by the OWL Web Ontology Language to Support Knowledge Engineering in Decision Support Systems.

    PubMed

    Ramzan, Asia; Wang, Hai; Buckingham, Christopher

    2014-01-01

    Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual for specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the web-ontology language, OWL, as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains and the accompanying OWL specification facilitates its implementation.

  11. Font adaptive word indexing of modern printed documents.

    PubMed

    Marinai, Simone; Marino, Emanuele; Soda, Giovanni

    2006-08-01

    We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of one suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.

  12. MOLEonline: a web-based tool for analyzing channels, tunnels and pores (2018 update).

    PubMed

    Pravda, Lukáš; Sehnal, David; Toušek, Dominik; Navrátilová, Veronika; Bazgier, Václav; Berka, Karel; Svobodová Vareková, Radka; Koca, Jaroslav; Otyepka, Michal

    2018-04-30

    MOLEonline is an interactive, web-based application for the detection and characterization of channels (pores and tunnels) within biomacromolecular structures. The updated version of MOLEonline overcomes limitations of the previous version by incorporating the recently developed LiteMol Viewer visualization engine and providing a simple, fully interactive user experience. The application enables two modes of calculation: one is dedicated to the analysis of channels while the other was specifically designed for transmembrane pores. As the application can use both PDB and mmCIF formats, it can be leveraged to analyze a wide spectrum of biomacromolecular structures, e.g. stemming from NMR, X-ray and cryo-EM techniques. The tool is interconnected with other bioinformatics tools (e.g., PDBe, CSA, ChannelsDB, OPM, UniProt) to help with both the setup and the analysis of acquired results. MOLEonline provides unprecedented analytics for the detection and structural characterization of channels, as well as information about their numerous physicochemical features. Here we present the application of MOLEonline for structural analyses of α-hemolysin and transient receptor potential mucolipin 1 (TRPML1) pores. The MOLEonline application is freely available via the Internet at https://mole.upol.cz.

  13. Semantic similarity measures in the biomedical domain by leveraging a web search engine.

    PubMed

    Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching

    2013-07-01

    Various studies of web-related semantic similarity measures have been carried out; however, measuring the semantic similarity between two terms remains a challenging task. Traditional ontology-based methodologies have the limitation that both concepts must reside in the same ontology tree(s), and in practice this assumption does not always hold. Corpus-based methodologies can overcome this limitation if the corpus is sufficiently large, and the web is an enormous, continuously growing corpus. Therefore, a method of estimating semantic similarity is proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. The features are extracted as the co-occurrence patterns of two given terms P and Q, by querying P, Q, as well as P AND Q, and the web search hit counts of the defined lexico-syntactic patterns. The similarity scores of the different patterns are evaluated, by adapting support vector machines for classification, to leverage the robustness of semantic similarity measures. Experimental results validating against two datasets (dataset 1 provided by A. Hliaoutakis; dataset 2 provided by T. Pedersen) are presented and discussed. In dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. In dataset 2, the proposed method obtains the best correlation coefficients with physician scores (SNOMED-CT: 0.705; MeSH: 0.723) compared with other methods, although the correlation coefficients with coder scores (SNOMED-CT: 0.496; MeSH: 0.539) showed the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to physicians' ratings. Furthermore, the study provides a cornerstone investigation for extracting fully relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.
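
    One widely used page-count measure in this line of work is the WebJaccard coefficient; a minimal sketch follows, with made-up hit counts and a noise threshold c (a common convention in this literature). Note that the paper combines several such pattern scores with a support vector machine rather than relying on any single coefficient.

    ```javascript
    // WebJaccard over search-engine hit counts for P, Q, and "P AND Q".
    // The threshold c guards against untrustworthy low co-occurrence counts.
    function webJaccard(hitsP, hitsQ, hitsPQ, c = 5) {
      if (hitsPQ <= c) return 0; // too few co-occurrences to trust
      return hitsPQ / (hitsP + hitsQ - hitsPQ);
    }

    // Example with made-up counts for two biomedical terms:
    console.log(webJaccard(1200000, 800000, 150000)); // ≈ 0.081
    ```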

  14. 49 CFR 571.5 - Matter incorporated by reference

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Excerpt of the contact listing for organizations whose materials are incorporated by reference, including: the U.S. Government Printing Office, Washington, DC 20402; the Illuminating Engineering Society of North America (IES), 120 Wall St...; the National Center for Health Statistics (phone: 1-800-232-4636; web: http://www.cdc.gov/nchs; "Weight, Height, and Selected Body Dimensions of..."); and the Society of Automotive Engineers (SAE), Pennsylvania 15096 (phone: 1-724-776-4841; web: http://www.sae.org).

  15. Getting Answers to Natural Language Questions on the Web.

    ERIC Educational Resources Information Center

    Radev, Dragomir R.; Libner, Kelsey; Fan, Weiguo

    2002-01-01

    Describes a study that investigated the use of natural language questions on Web search engines. Highlights include query languages; differences in search engine syntax; and results of logistic regression and analysis of variance that showed aspects of questions that predicted significantly different performances, including the number of words,…

  16. "Just the Answers, Please": Choosing a Web Search Service.

    ERIC Educational Resources Information Center

    Feldman, Susan

    1997-01-01

    Presents guidelines for selecting World Wide Web search engines. Real-life questions were used to test six search engines. Queries sought company information, product reviews, medical information, foreign information, technical reports, and current events. Compares performance and features of AltaVista, Excite, HotBot, Infoseek, Lycos, and Open…

  17. A Search Engine Features Comparison.

    ERIC Educational Resources Information Center

    Vorndran, Gerald

    Until recently, the World Wide Web (WWW) public access search engines have not included many of the advanced commands, options, and features commonly available with the for-profit online database user interfaces, such as DIALOG. This study evaluates the features and characteristics common to both types of search interfaces, examines the Web search…

  18. Next-Gen Search Engines

    ERIC Educational Resources Information Center

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  19. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    PubMed

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system (HMSR) resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable, good-quality HMSR information related to medical processes, pathological knowledge, and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server, multi-layer, multi-agent architecture and the principles of semantic web services, acquiring accurate and reliable HMSR information dynamically through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information, a multi-agent crawler was implemented to build a content-based database of HMSR information, and a new semantic-based PageRank score with related mathematical formulas was defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats, and operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine, and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. It allows different users, such as orthopedic patients and experts, healthcare system managers, or medical students, to access remotely useful, accurate, reliable, good-quality HMSR information for their learning and medical purposes.
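
    As background for the "semantic-based PageRank score" mentioned above (the record does not give its exact formula), the sketch below implements the standard PageRank power iteration that such variants modify; the graph, damping factor, and iteration count are illustrative.

    ```javascript
    // Standard PageRank power iteration (not the paper's semantic variant).
    // Graph is an adjacency list of node -> outgoing links; every node is
    // assumed to have at least one outlink, so dangling nodes are ignored.
    function pageRank(graph, damping = 0.85, iterations = 50) {
      const nodes = Object.keys(graph);
      const n = nodes.length;
      let rank = Object.fromEntries(nodes.map(v => [v, 1 / n]));
      for (let i = 0; i < iterations; i++) {
        const next = Object.fromEntries(nodes.map(v => [v, (1 - damping) / n]));
        for (const v of nodes) {
          for (const w of graph[v]) {
            next[w] += (damping * rank[v]) / graph[v].length; // share rank over outlinks
          }
        }
        rank = next;
      }
      return rank;
    }

    console.log(pageRank({ a: ["b"], b: ["a", "c"], c: ["a"] }));
    ```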

  20. Results from a Web Impact Factor Crawler.

    ERIC Educational Resources Information Center

    Thelwall, Mike

    2001-01-01

    Discusses Web impact factors (WIFs), Web versions of the impact factors for journals, and how they can be calculated by using search engines. Highlights include HTML and document indexing; Web page links; a Web crawler designed for calculating WIFs; and WIFs for United Kingdom universities that measured research profiles or capability. (Author/LRW)
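
    In Ingwersen's formulation, a WIF is the ratio of the number of pages linking to a site to the number of pages within the site, with both counts obtained from search engine queries or a crawler. A trivial sketch, with made-up counts:

    ```javascript
    // Simplified Web Impact Factor: inlinking pages / pages in the site.
    function webImpactFactor(inlinkPageCount, sitePageCount) {
      if (sitePageCount === 0) return NaN; // undefined for an empty site
      return inlinkPageCount / sitePageCount;
    }

    console.log(webImpactFactor(420, 3500)); // -> 0.12
    ```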

  1. Surfing the World Wide Web to Education Hot-Spots.

    ERIC Educational Resources Information Center

    Dyrli, Odvard Egil

    1995-01-01

    Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)

  2. What do patients know about their low back pain? An analysis of the quality of information available on the Internet.

    PubMed

    Galbusera, Fabio; Brayda-Bruno, Marco; Freutel, Maren; Seitz, Andreas; Steiner, Malte; Wehrle, Esther; Wilke, Hans-Joachim

    2012-01-01

    Previous surveys showed that web sites providing health information about low back pain are of poor quality; however, the rapid and continuous evolution of Internet content may cast doubt on the current validity of those investigations. The present study aims to quantitatively assess the quality of the Internet information about low back pain retrieved with the most commonly employed search engines. An Internet search with the keywords "low back pain" was performed in English with Google, Yahoo!® and Bing™. The top 30 hits obtained with each search engine were evaluated by five independent raters, whose scores were averaged, following criteria derived from previous works. All search results were categorized by whether they declared compliance with a quality standard for health information (e.g. HONcode) and by web site type (institutional, free informative, commercial, news, social network, unknown). The quality of the hits retrieved by the three search engines was extremely similar. The web sites had a clear purpose and were easy to navigate, but mostly lacked validity and quality of the provided links. Conformity to a quality standard was correlated with markedly greater quality of the web sites in all respects. Institutional web sites had the best validity and ease of use. Free informative web sites had good quality but markedly lower validity than institutional web sites. Commercial web sites provided more biased information. News web sites were well designed and easy to use, but lacked validity. The average quality of the hits retrieved by the most commonly employed search engines can be described as satisfactory and compares favorably with previous investigations. User awareness of the need to check the quality of the information remains a concern.

  3. HydroViz: design and evaluation of a Web-based tool for improving hydrology education

    NASA Astrophysics Data System (ADS)

    Habib, E.; Ma, Y.; Williams, D.; Sharif, H. O.; Hossain, F.

    2012-10-01

    HydroViz is a Web-based, student-centered, educational tool designed to support active learning in the field of Engineering Hydrology. The design of HydroViz is guided by a learning model that is based on learning with data and simulations, using real-world natural hydrologic systems to convey theoretical concepts, and using Web-based technologies for dissemination of the hydrologic education developments. This model, while used here in a hydrologic education context, can be adapted to other engineering educational settings. HydroViz leverages the free Google Earth resources to enable presentation of geospatial data layers and embed them in web pages that have the same look and feel as Google Earth. These design features significantly facilitate the dissemination and adoption of HydroViz by any interested educational institution regardless of its access to data or computer models. To facilitate classroom usage, HydroViz is populated with a set of course modules that can be used incrementally within different stages of an engineering hydrology curriculum. A pilot evaluation study was conducted to determine the effectiveness of the HydroViz tool in delivering its educational content, to examine the buy-in of the program by faculty and students, and to identify specific project components that need to be further pursued and improved. A total of 182 students from seven freshman and senior-level undergraduate classes in three universities participated in the study. HydroViz was effective in facilitating students' learning and understanding of hydrologic concepts and increasing related skills. Students had positive perceptions of various features of HydroViz and believed that HydroViz fits well in the curriculum. In general, HydroViz tended to be more effective with students in senior-level classes than with students in freshman classes. Lessons learned from this pilot study provide guidance for future adaptation and expansion studies to scale up the application and utility of HydroViz and other similar systems into various hydrology and water-resource engineering curriculum settings. The paper presents a set of design principles that contribute to the development of other active hydrology educational systems.

  4. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    PubMed

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative to create personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. The construction of the Smart Drug Search is described as a proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and related topics. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of a concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e. concepts related by other intermediary concepts, can be useful to integrate information from different studies and look into non-trivial relations. The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are modularity and extensibility in terms of software design, the use of open-source consolidated Web technologies, and the ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search keeps over 1,186,000 documents, containing more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef.

  5. ACHP | Federal Historic Preservation Web Sites

    Science.gov Websites

    Federal Historic Preservation Web Sites: Historic American Buildings Survey / Historic American Engineering Record / Historic American Landscapes Survey (lcweb2.loc.gov/ammem/hhhtml)

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. Lynn Watney; John H. Doveton

    GEMINI (Geo-Engineering Modeling through Internet Informatics) is a public-domain web application focused on analysis and modeling of petroleum reservoirs and plays (http://www.kgs.ukans.edu/Gemini/index.html). GEMINI creates a virtual project by "on-the-fly" assembly and analysis of on-line data either from the Kansas Geological Survey or uploaded by the user. GEMINI's suite of geological and engineering web applications for reservoir analysis includes: (1) petrofacies-based core and log modeling using an interactive relational rock catalog and log analysis modules; (2) a well profile module; (3) interactive cross sections to display "marked" wireline logs; (4) deterministic gridding and mapping of petrophysical data; (5) calculation and mapping of layer volumetrics; (6) material balance calculations; (7) a PVT calculator; (8) a DST analyst; (9) an automated hydrocarbon association navigator (KHAN) for database mining; and (10) tutorial and help functions. The Kansas Hydrocarbon Association Navigator (KHAN) utilizes petrophysical databases to estimate hydrocarbon pay or other constituents at a play or field scale. Databases analyzed and displayed include digital logs, core analysis and photos, DST, and production data. GEMINI accommodates distant collaborations using secure password protection and authorized access. Assembled data, analyses, charts, and maps can readily be moved to other applications. GEMINI's target audience includes small independents and consultants seeking to find, quantitatively characterize, and develop subtle and bypassed pays by leveraging the growing base of digital data resources. Participating companies involved in the testing and evaluation of GEMINI included Anadarko, BP, Conoco-Phillips, Lario, Mull, Murfin, and Pioneer Resources.

  7. [Improving vaccination social marketing by monitoring the web].

    PubMed

    Ferro, A; Bonanni, P; Castiglia, P; Montante, A; Colucci, M; Miotto, S; Siddu, A; Murrone, L; Baldo, V

    2014-01-01

    Immunisation is one of the most important and cost-effective interventions in Public Health because of its significant positive impact on population health. However, since Jenner's discovery there has always been a lively debate between supporters and opponents of vaccination; today the anti-vaccination movement spreads its message mostly on the web, disseminating inaccurate data through blogs and forums and increasing vaccine rejection. In this context, the Società Italiana di Igiene (SItI) created a web project to fight misinformation about vaccinations on the web through a series of information tools, including scientific articles, educational information, videos and multimedia presentations. The web portal (http://www.vaccinarsi.org) was published in May 2013, and over one hundred web pages related to vaccinations are already available. Recently a forum, a periodic newsletter and a Twitter page have been created. There has been an average of 10,000 hits per month, and currently our users are mostly healthcare professionals. The visibility of the site is very good: it currently ranks first in Google's search engine when typing the word "vaccinarsi". The results of the first four months of activity are extremely encouraging and show the importance of this project; furthermore, an application for quality certification by independent international organizations has been submitted.

  8. FindZebra: a search engine for rare diseases.

    PubMed

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/.

  9. Information about liver transplantation on the World Wide Web.

    PubMed

    Hanif, F; Sivaprakasam, R; Butler, A; Huguet, E; Pettigrew, G J; Michael, E D A; Praseedom, R K; Jamieson, N V; Bradley, J A; Gibbs, P

    2006-09-01

    Orthotopic liver transplant (OLTx) has evolved into a successful surgical management for end-stage liver disease. Awareness and information about OLTx are an important tool in assisting OLTx recipients and people supporting them, including non-transplant clinicians. The study aimed to investigate the nature and quality of liver transplant-related patient information on the World Wide Web. Four common search engines were used to explore the Internet using the keywords 'Liver transplant'. The URLs (uniform resource locators) of the top 50 returns were chosen, as it was judged unlikely that the average user would search beyond the first 50 sites returned by a given search. Each Web site was assessed on the following categories: origin, language, accessibility and extent of the information. A weighted Information Score (IS) was created to assess the clinical and educational value of each Web site and was scored independently by three transplant clinicians. The Internet search performed with the aid of the four search engines yielded a total of 2,255,244 Web sites. Of the 200 possible sites, only 58 Web sites were assessed because of repetition of the same Web sites and non-accessible links. The overall median weighted IS was 22 (IQR 1 - 42). Of the 58 Web sites analysed, 45 (77%) belonged to the USA, six (10%) were European, and seven (12%) were from the rest of the world. The median weighted IS of publications originating from Europe and the USA was 40 (IQR = 22 - 60) and 23 (IQR = 6 - 38), respectively. Although European Web sites produced a higher weighted IS [40 (IQR = 22 - 60)] as compared with the USA publications [23 (IQR = 6 - 38)], this was not statistically significant (p = 0.07). Web sites belonging to academic institutions and professional organizations scored significantly higher, with median weighted ISs of 28 (IQR = 16 - 44) and 24 (IQR = 12 - 35), respectively, as compared with the commercial Web sites (median = 6 with IQR of 0 - 14, p = 0.001). There was an Intraclass Correlation Coefficient (ICC) of 0.89 and an associated 95% CI (0.83, 0.93) for the three observers on the 58 Web sites. The study highlights the need for a significant improvement in the information available on the World Wide Web about OLTx. It concludes that the educational material currently available on the World Wide Web about liver transplant is of poor quality and requires rigorous input from health care professionals. The authors suggest that clinicians should take the necessary steps to improve the standard of information available on their relevant Web sites and must take an active role in helping their patients find Web sites that provide the best and most accurate information applicable to the loco-regional circumstances.

  10. A review of the reporting of web searching to identify studies for Cochrane systematic reviews.

    PubMed

    Briscoe, Simon

    2018-03-01

    The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method. Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reviews reported using web searching using a search engine or website as a literature search method. In the majority of reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews.

  11. Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools

    NASA Astrophysics Data System (ADS)

    McCutchan, Elizabeth

    2017-01-01

    The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.

  12. Interactive Analysis of General Beam Configurations using Finite Element Methods and JavaScript

    NASA Astrophysics Data System (ADS)

    Hernandez, Christopher

    Advancements in computer technology have contributed to the widespread practice of modelling and solving engineering problems through the use of specialized software. The wide use of engineering software comes with the disadvantage of the costs of the required software licenses. The creation of accurate, trusted, and freely available applications capable of conducting meaningful analysis of engineering problems is one way to mitigate the costs associated with everyday engineering computations. Writing applications in the JavaScript programming language allows them to run within any browser, without the need to install specialized software, since all internet browsers are equipped with virtual machines (VMs) that execute JavaScript code. The objective of this work is the development of an application that performs the analysis of a completely general beam through use of the finite element method. The app is written in JavaScript and embedded in a web page, so it can be downloaded and executed by any user with an internet connection. The application allows the user to analyze any uniform or non-uniform beam, with any combination of applied forces, moments, distributed loads, and boundary conditions. Outputs include lists of the beam deflections and slopes, lateral deflection and slope graphs, bending stress distributions, and shear and moment diagrams. To validate the methodology of the GBeam finite element app, its results are verified against results obtained from two other established finite element solvers for fifteen separate test cases.
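
    To make the finite element ingredient concrete, the following is the standard Euler-Bernoulli beam element stiffness matrix that such an app assembles for each element; it is a generic textbook sketch in JavaScript, not code from GBeam itself.

    ```javascript
    // 4x4 Euler-Bernoulli beam element stiffness matrix over the element
    // degrees of freedom [v1, theta1, v2, theta2], where E is Young's
    // modulus, I the second moment of area, and L the element length.
    function beamElementStiffness(E, I, L) {
      const k = (E * I) / (L ** 3);
      return [
        [ 12 * k,       6 * k * L,    -12 * k,       6 * k * L],
        [  6 * k * L,   4 * k * L * L, -6 * k * L,   2 * k * L * L],
        [-12 * k,      -6 * k * L,     12 * k,      -6 * k * L],
        [  6 * k * L,   2 * k * L * L, -6 * k * L,   4 * k * L * L],
      ];
    }

    // Example: a 0.5 m steel element in SI units.
    console.log(beamElementStiffness(200e9, 8.1e-6, 0.5));
    ```

    A full solver assembles these element matrices into a global stiffness matrix, applies the boundary conditions, and solves for the nodal deflections and slopes that the app then plots.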

  13. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers

    PubMed Central

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents. PMID:27855179
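
    As an illustration of the kind of content features such language-based classifiers consume (the feature set here is generic, not the paper's exact list), a small extractor might compute a keyword-stuffing signal and a title-coverage signal:

    ```javascript
    // Generic spam-detection features: maximum term frequency ratio
    // (keyword stuffing) and fraction of query terms covered by the title.
    function pageFeatures(title, body, queryTerms) {
      const words = body.toLowerCase().split(/\W+/).filter(Boolean);
      const counts = new Map();
      for (const w of words) counts.set(w, (counts.get(w) || 0) + 1);
      const maxTermRatio = Math.max(...counts.values()) / words.length;
      const titleCoverage = queryTerms.filter(t =>
        title.toLowerCase().includes(t.toLowerCase())).length / queryTerms.length;
      return { length: words.length, maxTermRatio, titleCoverage };
    }

    console.log(pageFeatures("Cheap pills", "buy buy buy pills now", ["pills", "cheap"]));
    // -> { length: 5, maxTermRatio: 0.6, titleCoverage: 1 }
    ```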

  14. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    PubMed

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.

  15. Context-aware system design

    NASA Astrophysics Data System (ADS)

    Chan, Christine S.; Ostertag, Michael H.; Akyürek, Alper Sinan; Šimunić Rosing, Tajana

    2017-05-01

    The Internet of Things envisions a web-connected infrastructure of billions of sensors and actuation devices. However, the current state-of-the-art presents another reality: monolithic end-to-end applications tightly coupled to a limited set of sensors and actuators. Growing such applications with new devices or behaviors, or extending the existing infrastructure with new applications, involves redesign and redeployment. We instead propose a modular approach to these applications, breaking them into an equivalent set of functional units (context engines) whose input/output transformations are driven by general-purpose machine learning, demonstrating an improvement in compute redundancy and computational complexity with minimal impact on accuracy. In conjunction with formal data specifications, or ontologies, we can replace application-specific implementations with a composition of context engines that use common statistical learning to generate output, thus improving context reuse. We implement interconnected context-aware applications using our approach, extracting user context from sensors in both healthcare and grid applications. We compare our infrastructure to single-stage monolithic implementations with single-point communications between sensor nodes and the cloud servers, demonstrating a reduction in combined system energy by 22-45%, and multiplying the battery lifetime of power-constrained devices by at least 22x, with easy deployment across different architectures and devices.
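
    A minimal sketch of the composition idea follows, with trivial stand-in engines: each context engine is an input-to-output transformation, and an application is a chain of such units rather than a monolith. The engines and data here are hypothetical illustrations, not the paper's learned models.

    ```javascript
    // Compose context engines into a pipeline: each engine transforms the
    // context produced by the previous one.
    const compose = (...engines) => input =>
      engines.reduce((context, engine) => engine(context), input);

    // Hypothetical engines: accelerometer summary -> activity -> advice.
    const activityEngine = samples =>
      ({ activity: samples.mean > 1.5 ? "walking" : "resting" });
    const adviceEngine = ctx =>
      ctx.activity === "resting" ? "time to move" : "keep it up";

    const pipeline = compose(activityEngine, adviceEngine);
    console.log(pipeline({ mean: 0.9 })); // -> "time to move"
    ```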

  16. Crosstalk: The Journal of Defense Software Engineering. Volume 22, Number 2, February 2009

    DTIC Science & Technology

    2009-02-01

    IT Investment With Service-Oriented Architecture (SOA): Geoffrey Raines examines how an SOA offers federal senior leadership teams an incremental and...values, and is used by 30 million people. [1] Given budget constraints, an incremental approach seems to be required. A Path Forward: SOA, as implemented...point of view, SOA offers several positive benefits. Language-Neutral Integration: Web-enabling applications with a common browser interface became a

  17. Second Generation Weather Impacts Decision Aid Applications and Web Services Overview

    DTIC Science & Technology

    2013-07-01

    References cited in the report (excerpt): Chesley, C. H.; Spillane, A. R.; Eure, S. L.; Shaw, P. J. Engineering Plan of the Integrated Weather Effects Decision Aids (IWEDA) Software Program... Planning Tool. Proceedings of the 1992 Battlefield Atmospherics Conference, 1992; pp. 501-509. 6. Chesley, C. H.; Johnson, J. S.; Maunz, W. G.; Spillane, A

  18. Active Visual SLAM with Exploration for Autonomous Underwater Navigation

    DTIC Science & Technology

    2012-01-01

    tourism. [Figure caption fragments: reconstruction of Notre Dame de Paris (Snavely et al., 2006); (c) web-scale landmark recognition engine (Zheng et al., 2009).] ...structures, such as Notre Dame Cathedral in Paris and the Great Wall of China (Figure 1.3(b)), using photographs compiled from the Internet. Given the...representation. Originally developed for text-based applications, expansions of this approach to images are found in Leung and Malik (2001) and Sivic and Zisserman

  19. SOLE: Applying Semantics and Social Web to Support Technology Enhanced Learning in Software Engineering

    NASA Astrophysics Data System (ADS)

    Colomo-Palacios, Ricardo; Jiménez-López, Diego; García-Crespo, Ángel; Blanco-Iglesias, Borja

    eLearning educative processes are a challenge for educational institutions and education professionals. In an environment in which learning resources are being produced, catalogued and stored in innovative ways, SOLE provides a platform in which exam questions can be produced with the support of Web 2.0 tools, catalogued and labeled via the semantic web, and stored and distributed using eLearning standards. This paper presents SOLE, a social network for sharing exam questions, particularized for the software engineering domain and built using semantic web and eLearning standards such as the IMS Question and Test Interoperability specification 2.1.

  20. Syndromic surveillance models using Web data: the case of scarlet fever in the UK.

    PubMed

    Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel

    2012-03-01

    Recent research has shown the potential of Web queries as a source for syndromic surveillance, and existing studies show that these queries can be used as a basis for estimating and predicting the development of a syndromic disease, such as influenza, using log-linear (logit) statistical models. Two alternative models of the relationship between cases and Web queries are applied in this paper. We examine the applicability of statistical methods that relate search engine queries to scarlet fever cases in the UK, taking advantage of tools to acquire the appropriate data from Google, and using an alternative statistical method based on gamma distributions. The results show that with logit models the Pearson correlation between Web queries and the data obtained from the official agencies must be over 0.90; otherwise the prediction of the peak and the spread of the distributions shows significant deviations. We describe the gamma distribution model and show that we obtain better results in all cases using gamma transformations, especially in those with a smaller correlation.
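
    A hedged sketch of the gamma-curve idea: fit a gamma-shaped epidemic curve to weekly query volumes and read off the predicted peak. The data are synthetic placeholders, and the paper's exact transformation is not reproduced here.

      # Fit a gamma-shaped curve to weekly search-query volumes (synthetic data).
      import numpy as np
      from scipy import stats
      from scipy.optimize import curve_fit

      weeks = np.arange(1, 11, dtype=float)
      queries = np.array([5, 12, 30, 55, 70, 62, 40, 22, 10, 4], dtype=float)

      def gamma_curve(t, amplitude, shape, scale):
          """Epidemic curve modeled as a scaled gamma density."""
          return amplitude * stats.gamma.pdf(t, a=shape, scale=scale)

      params, _ = curve_fit(gamma_curve, weeks, queries, p0=[300.0, 3.0, 2.0])
      fitted = gamma_curve(weeks, *params)
      print("peak predicted near week", weeks[np.argmax(fitted)])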

  1. The Web Resource Collaboration Center

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.

    2004-01-01

    The Web Resource Collaboration Center (WRCC) is a web-based tool developed to help software engineers build their own web-based learning and performance support systems. Designed using various online communication and collaboration technologies, the WRCC enables people to: (1) build a learning and professional development resource that provides…

  2. Multimedia Web Searching Trends.

    ERIC Educational Resources Information Center

    Ozmutlu, Seda; Spink, Amanda; Ozmutlu, H. Cenk

    2002-01-01

    Examines and compares multimedia Web searching by Excite and FAST search engine users in 2001. Highlights include audio and video queries; time spent on searches; terms per query; ranking of the most frequently used terms; and differences in Web search behaviors of U.S. and European Web users. (Author/LRW)

  3. Detecting people of interest from internet data sources

    NASA Astrophysics Data System (ADS)

    Cardillo, Raymond A.; Salerno, John J.

    2006-04-01

    In previous papers, we have documented success in determining the key people of interest from a large corpus of real-world evidence. Our recent efforts focus on exploring additional domains and data sources. Internet data sources such as email, web pages, and news feeds make it easier to gather a large corpus of documents for various domains, but detecting people of interest in these sources introduces new challenges. Analyzing these massive sources magnifies entity resolution problems, and demands a storage management strategy that supports efficient algorithmic analysis and visualization techniques. This paper discusses the techniques we used in order to analyze the ENRON email repository, which are also applicable to analyzing web pages returned from our "Buddy" meta-search engine.

  4. Web-based description of the space radiation environment using the Bethe-Bloch model

    NASA Astrophysics Data System (ADS)

    Cazzola, Emanuele; Calders, Stijn; Lapenta, Giovanni

    2016-01-01

    Space weather is a rapidly growing area of research not only in scientific and engineering applications but also in physics education and in the interest of the public. We focus especially on space radiation and its impact on space exploration. The topic is highly interdisciplinary, bringing together fundamental concepts of nuclear physics with aspects of radiation protection and space science. We give a new approach to presenting the topic by developing a web-based application that combines some of the fundamental concepts from these two fields into a single tool that can be used in the context of advanced secondary or undergraduate university education. We present DREADCode, an outreach and teaching tool to rapidly assess the current conditions of the radiation field in space. DREADCode uses the available data feeds from a number of ongoing space missions (ACE, GOES-13, GOES-15) to produce a first-order approximation of the radiation dose an astronaut would receive during a mission of exploration in deep space (i.e. far from the Earth's shielding magnetic field and from the radiation belts). DREADCode is based on an easy-to-use GUI available online from the European Space Weather Portal (www.spaceweather.eu/dreadcode). The core of the radiation transport computation, which produces the radiation dose from the particle fluence observed by the spacecraft fleet considered, is based on a relatively simple approximation: the Bethe-Bloch equation. DREADCode also assumes a simplified geometry and material configuration for the shields used to compute the dose. The approach is approximate and sacrifices some important physics in exchange for rapid execution time, which allows a real-time operation scenario. There is no intention here to produce an operational tool for use in space science and engineering. Rather, we present an educational tool at undergraduate level that uses modern web-based and programming methods to learn some of the most important concepts in the application of radiation protection to space weather problems.
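
    A minimal sketch of the simplified Bethe-Bloch stopping power behind such a first-order dose estimate; density and shell corrections and any shielding geometry are omitted, so the numbers are illustrative and this is not DREADCode's actual implementation.

      # Simplified Bethe-Bloch stopping power for a heavy charged particle.
      import math

      K = 0.307075       # MeV mol^-1 cm^2 (4*pi*N_A*r_e^2*m_e*c^2)
      ME_C2 = 0.511      # electron rest energy, MeV

      def stopping_power(beta: float, z: int, Z: int, A: float, I_mev: float) -> float:
          """-dE/dx in MeV cm^2/g, ignoring density and shell corrections."""
          gamma = 1.0 / math.sqrt(1.0 - beta**2)
          arg = 2.0 * ME_C2 * beta**2 * gamma**2 / I_mev
          return K * z**2 * (Z / A) / beta**2 * (math.log(arg) - beta**2)

      # Proton (z=1) at beta = 0.5 in aluminium (Z=13, A=26.98, I ~ 166 eV):
      print(stopping_power(0.5, 1, 13, 26.98, 166e-6))  # ~4.4 MeV cm^2/g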

  5. Web-Based Simulation Games for the Integration of Engineering and Business Fundamentals

    ERIC Educational Resources Information Center

    Calfa, Bruno; Banholzer, William; Alger, Monty; Doherty, Michael

    2017-01-01

    This paper describes a web-based suite of simulation games that have the purpose to enhance the chemical engineering curriculum with business-oriented decisions. Two simulation cases are discussed whose teaching topics include closing material and energy balances, importance of recycle streams, price-volume relationship in a dynamic market, impact…

  6. Search Engines: A Primer on Finding Information on the World Wide Web.

    ERIC Educational Resources Information Center

    Maddux, Cleborne

    1996-01-01

    Presents an annotated list of several World Wide Web search engines, including Yahoo, Infoseek, Alta Vista, Magellan, Lycos, Webcrawler, Excite, Deja News, and the LISZT Directory of discussion groups. Uniform Resource Locators (URLs) are included. Discussion assesses performance and describes rules and syntax for refining or limiting a search.…

  7. The Advancement in Using Remote Laboratories in Electrical Engineering Education: A Review

    ERIC Educational Resources Information Center

    Almarshoud, A. F.

    2011-01-01

    The rapid development in Internet technology and its big popularity has led some universities around the world to incorporate web-based learning in some of their programmes. The present paper introduces a comprehensive survey of the publications about using remote laboratories in electrical engineering education. Remote laboratories are web-based,…

  8. Use of an Academic Library Web Site Search Engine.

    ERIC Educational Resources Information Center

    Fagan, Jody Condit

    2002-01-01

    Describes an analysis of the search engine logs of Southern Illinois University, Carbondale's library to determine how patrons used the site search. Discusses results that showed patrons did not understand the function of the search and explains improvements that were made in the Web site and in online reference services. (Author/LRW)

  9. MetaSpider: Meta-Searching and Categorization on the Web.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel

    2001-01-01

    Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…

  10. GeoSearcher: Location-Based Ranking of Search Engine Results.

    ERIC Educational Resources Information Center

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  11. An Architecture for Automated Fire Detection Early Warning System Based on Geoprocessing Service Composition

    NASA Astrophysics Data System (ADS)

    Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.

    2013-09-01

    Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Because data and processing resources in disaster management are distributed, utilizing a Service Oriented Architecture (SOA) to take advantage of workflows of services provides efficient, flexible and reliable implementations for encountering different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in an SOA platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow for acquiring and processing remotely sensed data, detecting fires, and sending notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented using web-based processing of remote sensing imagery from MODIS. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as a composition engine. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client based on open-source software was developed to manage data, metadata, processes, and authorities. Investigating the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.
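
    A hedged sketch of invoking such a WPS process from Python with the OWSLib client library. The service URL, process identifier, and input names are hypothetical stand-ins for the MODIS fire-extraction composition described above.

      # Discover and execute a (hypothetical) WPS fire-detection process.
      from owslib.wps import WebProcessingService, monitorExecution

      wps = WebProcessingService("http://example.org/wps")  # hypothetical endpoint
      print([p.identifier for p in wps.processes])          # list advertised processes

      execution = wps.execute(
          "ExtractFireEvents",                              # hypothetical process id
          inputs=[("modis_scene", "MOD021KM.A2013160.0655"),
                  ("threshold_kelvin", "330")],
      )
      monitorExecution(execution)                           # poll until finished
      print(execution.status)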

  12. How Public Is the Web?: Robots, Access, and Scholarly Communication.

    ERIC Educational Resources Information Center

    Snyder, Herbert; Rosenbaum, Howard

    1998-01-01

    Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…

  13. Bibliometrics of the World Wide Web: An Exploratory Analysis of the Intellectual Structure of Cyberspace.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1996-01-01

    Examines the bibliometrics of the World Wide Web based on analysis of Web pages collected by the Inktomi "Web Crawler" and on the use of the DEC AltaVista search engine for cocitation analysis of a set of Earth Science related Web sites. Looks at the statistical characteristics of Web documents and their hypertext links, and the…

  14. Promoting Your Web Site.

    ERIC Educational Resources Information Center

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  15. The Arctic Research Mapping Application (ARMAP): a Geoportal for Visualizing Project-level Information About U.S. Funded Research in the Arctic

    NASA Astrophysics Data System (ADS)

    Kassin, A.; Cody, R. P.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Score, R.; Escarzaga, S. M.; Tweedie, C. E.

    2016-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who is doing what, when and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information, including links to data where possible. The latest ARMAP iteration has i) reworked the search user interface (UI) to enable multiple filters to be applied in user-driven queries and ii) implemented the ArcGIS JavaScript API 4.0 to allow deployment of 3D maps directly in a user's web browser and enhanced customization of popups. Module additions include i) a dashboard UI powered by a back-end Apache SOLR engine to visualize data in intuitive, interactive charts and ii) a printing module that allows users to customize maps and export them to different formats (PDF, PPT, GIF and JPG). New reference layers and an updated ship tracks layer have also been added. These improvements have been made to enhance discoverability, improve logistics coordination, identify geographic gaps in research and observation effort, and foster enhanced collaboration among the research community. Additionally, ARMAP can be used to demonstrate past, present, and future research effort supported by the U.S. Government.

  16. An Open Source Tool to Test Interoperability

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages and management of errors. Testing these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. Following standards increases interoperability between components while reducing the time needed to develop new software. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open-source facility, available on SourceForge, that can be run via the command line, deployed in a web servlet container or integrated in a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against schemas and Schematron-based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. These assertions include conformance of HTTP responses, conformance of GML-encoded data, proper values for elements and attributes in the XML, and correct error responses. This presentation provides an overview of TEAM Engine, an introduction to testing via the OGC testing web site, and a description of performing local tests. It also provides information about how to participate in the open-source development of TEAM Engine.
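
    A minimal sketch of the kind of assertion such a compliance test makes: issue a WFS GetCapabilities request and check the HTTP status, root element, and reported version. The endpoint URL is a hypothetical placeholder, and TEAM Engine itself expresses these checks in CTL rather than Python.

      # Toy compliance-style assertions against a (hypothetical) WFS endpoint.
      import requests
      from lxml import etree

      url = "http://example.org/wfs"  # hypothetical WFS endpoint
      resp = requests.get(url, params={"service": "WFS", "version": "1.0.0",
                                       "request": "GetCapabilities"}, timeout=30)

      assert resp.status_code == 200, "HTTP response must be 200 OK"
      root = etree.fromstring(resp.content)
      assert root.tag.endswith("WFS_Capabilities"), "unexpected root element"
      assert root.get("version") == "1.0.0", "service must report version 1.0.0"
      print("GetCapabilities assertions passed")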

  17. Running Rings Around the Web.

    ERIC Educational Resources Information Center

    McDermott, Irene E.

    1999-01-01

    Describes the development and current status of WebRing, a service that links related Web sites into a central hub. Discusses it as a viable alternative to other search engines and examines issues of free speech, use by the business sector, and implications for WebRing after its purchase by Yahoo! (LRW)

  18. Risk assessment for job burnout with a mobile health web application using questionnaire data: a proof of concept study.

    PubMed

    von Känel, Roland; van Nuffel, Marc; Fuchs, Walther J

    2016-01-01

    Job burnout has become a rampant epidemic in working societies, causing high productivity loss and healthcare costs. An easily accessible tool to detect clinically relevant risk may have the potential to avert the dire sequelae of burnout in a timely manner. As a start, we performed a proof-of-concept study to test the utilization of a mobile health web application for a free and anonymous burnout risk assessment with established questionnaires. We designed a client-side JavaScript web application for users who filled out demographic and psychometric data forms over the internet. Users were recruited through social media, back links from hospital websites, and search engine optimization. Similar to population-based studies, we used the Maslach Burnout Inventory-General Survey (MBI-GS) to calculate a burnout risk index (BRIX). As additional mental health burden indices, users filled out the Perceived Stress Scale, Insomnia Severity Index, and Profile of Mood States. Within six months, the MBI-GS was completed by 11,311 users (median age 33 years, 85% women), of whom 20.0% had no clinically relevant burnout risk, 54.7% had mild-to-moderate risk, and 25.3% had high risk. In the 2,947 users completing all questionnaires, female sex (B = -0.03), cohabiting (B = -0.03), negative affect (B = 0.46), positive affect (B = -0.20), perceived stress (B = 0.18), and insomnia symptoms (B = 0.04) explained 56.2% of the variance in the continuously scaled BRIX. The reliability was good to excellent for all psychometric scales. The weighting of the BRIX with mental health burden indices primarily modified the risk in users with mild-to-moderate burnout risk. A low-threshold web application can reliably assess the risk of job burnout. As the bulk of users had clinically relevant burnout scores, a web application may be useful for targeting employees at risk. The clinical value of the BRIX and its modification with coexistent/absent mental health burden awaits evaluation with work and health outcomes.
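
    An illustrative reconstruction of the reported linear model for the BRIX. The coefficients are those quoted above; the intercept and the scaling of the predictors are not reported, so the constant and the example inputs below are hypothetical placeholders.

      # Linear combination of the reported predictors (intercept is hypothetical).
      COEFFS = {
          "female_sex": -0.03,
          "cohabiting": -0.03,
          "negative_affect": 0.46,
          "positive_affect": -0.20,
          "perceived_stress": 0.18,
          "insomnia_symptoms": 0.04,
      }
      INTERCEPT = 0.0  # hypothetical; not reported in the abstract

      def predict_brix(predictors: dict) -> float:
          """Sum of coefficient-weighted predictor values plus the intercept."""
          return INTERCEPT + sum(COEFFS[name] * value
                                 for name, value in predictors.items())

      print(predict_brix({"female_sex": 1, "cohabiting": 0, "negative_affect": 1.2,
                          "positive_affect": -0.5, "perceived_stress": 0.8,
                          "insomnia_symptoms": 0.3}))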

  19. Visualization and Phospholipid Identification (VaLID): online integrated search engine capable of identifying and visualizing glycerophospholipids with given mass

    PubMed Central

    Figeys, Daniel; Fai, Stephen; Bennett, Steffany A. L.

    2013-01-01

    Motivation: Establishing phospholipid identities in large lipidomic datasets is a labour-intensive process. Where genomics and proteomics capitalize on sequence-based signatures, glycerophospholipids lack easily definable molecular fingerprints. Carbon chain length, degree of unsaturation, linkage, and polar head group identity must be calculated from mass-to-charge (m/z) ratios under defined mass spectrometry (MS) conditions. Given increasing MS sensitivity, many m/z values are not represented in existing prediction engines. To address this need, Visualization and Phospholipid Identification (VaLID) is a web-based application that returns all theoretically possible phospholipids for any m/z value and MS condition. Visualization algorithms produce multiple chemical structure files for each species. Curated lipids detected by the Canadian Institutes of Health Research Training Program in Neurodegenerative Lipidomics are provided as high-resolution structures. Availability: VaLID is available through the Canadian Institutes of Health Research Training Program in Neurodegenerative Lipidomics resources web site at https://www.med.uottawa.ca/lipidomics/resources.html. Contact: lipawrd@uottawa.ca Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:23162086
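
    A sketch of the enumeration idea behind such a mass-based lookup: walk over chain lengths and degrees of unsaturation and keep combinations whose computed mass falls within a tolerance of the query m/z. The head-group base masses below are hypothetical placeholders, not VaLID's curated values.

      # Enumerate candidate (head group, carbons, double bonds) for a query m/z.
      CH2 = 14.01565      # monoisotopic mass added per chain carbon (Da)
      H2 = 2.01565        # mass removed per double bond (Da)
      BASE = {"PC": 300.0, "PE": 258.0}   # hypothetical head-group base masses

      def candidates(target_mz: float, tol: float = 0.01):
          """Yield combinations whose computed mass is within tol of target_mz."""
          for head, base in BASE.items():
              for carbons in range(20, 49):      # combined acyl chain length
                  for dbonds in range(0, 13):
                      mass = base + carbons * CH2 - dbonds * H2
                      if abs(mass - target_mz) <= tol:
                          yield head, carbons, dbonds

      for hit in candidates(774.516):
          print(hit)   # -> ('PC', 34, 1) with these toy base masses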

  20. OntoPop: An Ontology Population System for the Semantic Web

    NASA Astrophysics Data System (ADS)

    Thongkrau, Theerayut; Lalitrojwong, Pattarachai

    The development of an ontology at the instance level requires the extraction of the terms defining the instances from various data sources. These instances are then linked to the concepts of the ontology, and relationships are created between the instances in the next step. However, before establishing links among data, ontology engineers must classify terms or instances from a web document into an ontology concept. A tool that helps ontology engineers with this task is called an ontology population system. Existing approaches are poorly suited to ontology development applications because of long processing times and difficulty with large or noisy data sets. The OntoPop system introduces a methodology that addresses these problems in two parts. First, we select meaningful features from syntactic relations, which produces more significant features than other methods. Second, we differentiate feature meanings and reduce noise based on latent semantic analysis. Experimental evaluation demonstrates that OntoPop works well, achieving an accuracy of 49.64%, a learning accuracy of 76.93%, and an execution time of 5.46 seconds per instance.
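
    A small sketch of the second step described above: reduce noise in syntactic co-occurrence features with latent semantic analysis (truncated SVD). The corpus is a toy stand-in for features mined from web documents, not OntoPop's pipeline.

      # LSA-style noise reduction over toy syntactic-relation features.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD

      # Each "document" stands for the syntactic-relation features of one instance.
      instance_features = [
          "eat ripe fruit sweet",
          "eat fresh fruit juicy",
          "drive fast car engine",
          "drive new car road",
      ]
      X = TfidfVectorizer().fit_transform(instance_features)
      lsa = TruncatedSVD(n_components=2, random_state=0)
      X_denoised = lsa.fit_transform(X)   # low-rank semantic representation
      print(X_denoised.round(2))          # similar instances land close together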

  1. Progress on ultrasonic flaw sizing in turbine-engine rotor components: bore and web geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, J.H.; Gray, T.A.; Thompson, R.B.

    1983-01-01

    The application of generic flaw-sizing techniques to specific components generally involves difficulties associated with geometrical complexity and simplifications arising from knowledge of the expected flaw distribution. This paper is concerned with the case of ultrasonic flaw sizing in turbine-engine rotor components. The sizing of flat, penny-shaped cracks in the web geometry is discussed, and new crack-sizing algorithms based on the Born and Kirchhoff approximations are introduced. Additionally, we propose a simple method for finding the size of a flat, penny-shaped crack given only the magnitude of the scattering amplitude. The bore geometry is discussed with primary emphasis on the cylindrical focusing of the incident beam. Important questions which are addressed include the effects of diffraction and the position of the flaw with respect to the focal line. The appropriate deconvolution procedures to account for these effects are introduced. Generic features of the theory are compared with experiment. Finally, the effects of focused transducers on the Born inversion algorithm are discussed.

  2. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  3. Colour and Optical Properties of Materials: An Exploration of the Relationship Between Light, the Optical Properties of Materials and Colour

    NASA Astrophysics Data System (ADS)

    Tilley, Richard J. D.

    2003-05-01

    Colour is an important and integral part of everyday life, and an understanding and knowledge of the scientific principles behind colour, with its many applications and uses, is becoming increasingly important to a wide range of academic disciplines, from the physical, medical and biological sciences through to the arts. Colour and the Optical Properties of Materials carefully introduces the science behind the subject, along with many modern and cutting-edge applications chosen to appeal to today's students. For science students, it provides a broad introduction to the subject and the many applications of colour. For more applied students, such as engineering and arts students, it provides the essential scientific background to colour and its many applications. Features: introduces the science behind the subject whilst closely connecting it to modern applications, such as colour displays, optical amplifiers and colour centre lasers; richly illustrated with full-colour plates; includes many worked examples, along with problems and exercises at the end of each chapter and selected answers at the back of the book; and a web site, including additional problems and full solutions to all the problems, which may be accessed at www.cardiff.ac.uk/uwcc/engin/staff/rdjt/colour. Written for students taking an introductory course in colour in a wide range of disciplines such as physics, chemistry, engineering, materials science, computer science, design, photography, architecture and textiles.

  4. Research on the Relationships between Chinese Journal Impact Factors and External Web Link Counts and Web Impact Factors

    ERIC Educational Resources Information Center

    An, Lu; Qiu, Junping

    2004-01-01

    Journal impact factors (JIFs) as determined by the Institute for Scientific and Technological Information of China (ISTIC) of forty-two Chinese engineering journals were compared with external Web link counts, obtained from Lycos, and Web Impact Factors (WIFs) of corresponding journal Web sites to determine if any significant correlation existed…

  5. Detecting the Norovirus Season in Sweden Using Search Engine Data – Meeting the Needs of Hospital Infection Control Teams

    PubMed Central

    Edelstein, Michael; Wallensten, Anders; Zetterqvist, Inga; Hulth, Anette

    2014-01-01

    Norovirus outbreaks severely disrupt healthcare systems. We evaluated whether Websök, an internet-based surveillance system using search engine data, improved norovirus surveillance and response in Sweden. We compared Websök users' characteristics with the general population, cross-correlated weekly Websök searches with laboratory notifications between 2006 and 2013, compared the time Websök and laboratory data crossed the epidemic threshold and surveyed infection control teams about their perception and use of Websök. Users of Websök were not representative of the general population. Websök correlated with laboratory data (b = 0.88-0.89) and gave an earlier signal to the onset of the norovirus season compared with laboratory-based surveillance. 17/21 (81%) infection control teams answered the survey, of which 11 (65%) believed Websök could help with infection control plans. Websök is a low-resource, easily replicable system that detects the norovirus season as reliably as laboratory data, but earlier. Using Websök in routine surveillance can help infection control teams prepare for the yearly norovirus season. PMID:24955857

  6. Detecting the norovirus season in Sweden using search engine data--meeting the needs of hospital infection control teams.

    PubMed

    Edelstein, Michael; Wallensten, Anders; Zetterqvist, Inga; Hulth, Anette

    2014-01-01

    Norovirus outbreaks severely disrupt healthcare systems. We evaluated whether Websök, an internet-based surveillance system using search engine data, improved norovirus surveillance and response in Sweden. We compared Websök users' characteristics with the general population, cross-correlated weekly Websök searches with laboratory notifications between 2006 and 2013, compared the time Websök and laboratory data crossed the epidemic threshold and surveyed infection control teams about their perception and use of Websök. Users of Websök were not representative of the general population. Websök correlated with laboratory data (b = 0.88-0.89) and gave an earlier signal to the onset of the norovirus season compared with laboratory-based surveillance. 17/21 (81%) infection control teams answered the survey, of which 11 (65%) believed Websök could help with infection control plans. Websök is a low-resource, easily replicable system that detects the norovirus season as reliably as laboratory data, but earlier. Using Websök in routine surveillance can help infection control teams prepare for the yearly norovirus season.
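
    A small sketch of the lead-lag comparison described above: cross-correlate a weekly search-volume series with laboratory notifications to estimate how many weeks the web signal leads. Both series are synthetic placeholders, not Websök data.

      # Estimate the lag at which search volumes best predict lab notifications.
      import numpy as np

      searches = np.array([3, 8, 20, 45, 60, 50, 30, 15, 6, 2], dtype=float)
      lab_cases = np.array([1, 3, 8, 20, 45, 60, 50, 30, 15, 6], dtype=float)

      def best_lag(x: np.ndarray, y: np.ndarray, max_lag: int = 4) -> int:
          """Lag (in weeks) maximizing the Pearson correlation of x against y."""
          corrs = {lag: np.corrcoef(x[:len(x) - lag or None], y[lag:])[0, 1]
                   for lag in range(max_lag + 1)}
          return max(corrs, key=corrs.get)

      print(best_lag(searches, lab_cases))  # -> 1: searches lead by about a week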

  7. Effectiveness of off-line and web-based promotion of health information web sites.

    PubMed

    Jones, Craig E; Pinnock, Carole B

    2002-01-01

    The relative effectiveness of off-line and web-based promotional activities in increasing the use of health information web sites by target audiences was compared. Visitor sessions were classified according to their method of arrival at the site (referral) as external web site, search engine, or "no referrer" (i.e., a visitor arriving at the site by inputting the URL or using bookmarks). The number of Australian visitor sessions correlated with no-referrer referrals but not with web site or search-engine referrals. Results showed that the targeted consumer group is more likely to access the web site as a result of off-line promotional activities. The properties of target audiences likely to influence the effectiveness of off-line versus on-line promotional strategies include the size of the Internet-using population of the target audience, their proficiency in the use of the Internet, and the increase in effectiveness of off-line promotional activities when applied to locally defined target audiences.

  8. Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool

    ERIC Educational Resources Information Center

    Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román

    2014-01-01

    This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…

  9. FloorspaceJS - A New, Open Source, Web-Based Geometry Editor for Building Energy Modeling (BEM): Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macumber, Daniel L; Horowitz, Scott G; Schott, Marjorie

    Across most industries, desktop applications are being rapidly migrated to web applications for a variety of reasons. Web applications are inherently cross-platform, mobile, and easier to distribute than desktop applications. Fueling this trend are a wide range of free, open-source libraries and frameworks that make it incredibly easy to develop powerful web applications. The building energy modeling community is just beginning to pick up on these larger trends, with a small but growing number of building energy modeling applications starting on or moving to the web. This paper presents a new, open-source, web-based geometry editor for Building Energy Modeling (BEM). The editor is written completely in JavaScript and runs in a modern web browser. The editor works on a custom JSON file format and is designed to be integrated into a variety of web and desktop applications. The web-based editor is available as a standalone web application at https://nrel.github.io/openstudio-geometry-editor/. An example integration is demonstrated with the OpenStudio desktop application. Finally, the editor can be easily integrated with a wide range of possible building energy modeling web applications.

  10. Userscripts for the life sciences.

    PubMed

    Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J

    2007-12-21

    The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity.

  11. Userscripts for the Life Sciences

    PubMed Central

    Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J

    2007-01-01

    Background The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity. PMID:18154664

  12. Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.

    2006-12-01

    The goal for search engines is to return results that are both accurate and complete: they should find only what you really want, and find everything you really want. Search engines (even meta-search engines) lack semantics. The basis for search is simply string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user in refining the search query, thereby reducing the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator: it categorizes the search results from different online resources, such as educational materials, publications, datasets and web search engines, that might be of interest to the user.
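
    A minimal sketch of the ontology-driven scoping described above: expand a user's term with synonyms and related concepts from a domain ontology before dispatching it to search engines. The ontology content here is a toy illustration, not Noesis's.

      # Expand a query term using a (toy) domain ontology to scope the search.
      DOMAIN_ONTOLOGY = {
          "pressure": {
              "synonyms": ["atmospheric pressure", "barometric pressure"],
              "related": ["isobar", "pressure gradient force"],
          },
      }

      def scoped_query(term: str) -> str:
          """Build a search string that keeps results inside the domain sense."""
          entry = DOMAIN_ONTOLOGY.get(term.lower())
          if entry is None:
              return term
          expansions = [term] + entry["synonyms"] + entry["related"]
          return " OR ".join(f'"{t}"' for t in expansions)

      print(scoped_query("pressure"))
      # "pressure" OR "atmospheric pressure" OR "barometric pressure" OR ...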

  13. A framework of medical equipment management system for in-house clinical engineering department.

    PubMed

    Chien, Chia-Hung; Huang, Yi-You; Chong, Fok-Ching

    2010-01-01

    Medical equipment management is an important issue for safety and cost in modern hospital operation, and the use of an efficient information system effectively improves management performance. In this study, we designed a framework for a medical equipment management system for use by an in-house clinical engineering department. The system is web-based and integrates clinical engineering and hospital information system components. Through the application of the related information, it immediately and continuously improves the operational management of medical devices. This system has been running in the National Taiwan University Hospital. The results showed only a few error cases in the maintenance sub-system's analysis of medical equipment. The information can be used to improve work quality, reduce maintenance cost, and promote the safety of medical devices for patients and clinical staff.

  14. Web application for automatic prediction of gene translation elongation efficiency.

    PubMed

    Sokolov, Vladimir; Zuraev, Bulat; Lashin, Sergei; Matushkin, Yury

    2015-09-03

    Expression efficiency is one of the major characteristics describing genes in various modern investigations. The expression efficiency of genes is regulated at various stages: transcription, translation, post-translational protein modification and others. In this study, a special EloE (Elongation Efficiency) web application is described. EloE sorts an organism's genes in descending order of the theoretical rate of the elongation stage of translation, based on an analysis of their nucleotide sequences. The theoretical results correlate significantly with available experimental data on gene expression in various organisms. In addition, the program identifies preferential codons in an organism's genes and determines the distribution of potential secondary-structure energy in the 5' and 3' regions of mRNA. EloE can be useful for preliminary estimation of translation elongation efficiency for genes for which experimental data are not yet available. Some results can be used, for instance, in other programs modeling artificial genetic structures in genetic engineering experiments.
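
    A sketch of one ingredient of such a theoretical elongation-rate estimate: score a coding sequence by how often it uses the organism's preferred codons and rank genes by that score. The preference table is a toy placeholder, not EloE's model.

      # Rank genes by a geometric-mean codon-preference score (toy weights).
      PREFERRED = {"GCU": 0.9, "GCC": 0.4, "AAA": 0.8, "AAG": 0.6}  # codon -> weight

      def elongation_score(cds: str) -> float:
          """Geometric mean of per-codon preference weights for a CDS."""
          codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
          weights = [PREFERRED.get(c, 0.5) for c in codons]  # 0.5 for unknown codons
          product = 1.0
          for w in weights:
              product *= w
          return product ** (1.0 / len(weights)) if weights else 0.0

      genes = {"geneA": "GCUAAAGCU", "geneB": "GCCAAGGCC"}
      ranked = sorted(genes, key=lambda g: elongation_score(genes[g]), reverse=True)
      print(ranked)  # genes in descending order of theoretical elongation efficiency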

  15. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    ERIC Educational Resources Information Center

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…

  16. Cognitive and Task Influences on Web Searching Behavior.

    ERIC Educational Resources Information Center

    Kim, Kyung-Sun; Allen, Bryce

    2002-01-01

    Describes results from two independent investigations of college students that were conducted to study the impact of differences in users' cognition and search tasks on Web search activities and outcomes. Topics include cognitive style; problem-solving; and implications for the design and use of the Web and Web search engines. (Author/LRW)

  17. Graph Structure in Three National Academic Webs: Power Laws with Anomalies.

    ERIC Educational Resources Information Center

    Thelwall, Mike; Wilkinson, David

    2003-01-01

    Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)

  18. MedlinePlus Connect: How it Works

    MedlinePlus

    ... how it looks depends on how it is implemented. Web Application: the Web application returns a formatted response ... for more examples of Web Application response pages. Web Service: the MedlinePlus Connect REST-based Web service ...

  19. Planetary Data Systems (PDS) Imaging Node Atlas II

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; McAuley, James M.

    2013-01-01

    The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through more than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, Java), a Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows the search, retrieval, and download of planetary images and associated metadata from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions with similar targets. If desired, the end user can also use a mission-specific view of the Atlas; the mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas. It is a multi-mission search engine that includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. The tool lets the end user query information about each image while ignoring data the user has no interest in. Users can reduce the number of images to look at by defining an area of interest with latitude and longitude ranges.

  20. Climate Engine - Monitoring Drought with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Hegewisch, K.; Daudert, B.; Morton, C.; McEvoy, D.; Huntington, J. L.; Abatzoglou, J. T.

    2016-12-01

    Drought has adverse effects on society through reduced water availability and agricultural production and increased wildfire risk. An abundance of remotely sensed imagery and climate data are being collected in near-real time that can provide place-based monitoring and early warning of drought and related hazards. However, in an era of increasing wealth of earth observations, tools that quickly access, compute, and visualize archives, and provide answers at relevant scales to better inform decision-making are lacking. We have developed ClimateEngine.org, a web application that uses Google's Earth Engine platform to enable users to quickly compute and visualize real-time observations. A suite of drought indices allow us to monitor and track drought from local (30-meters) to regional scales and contextualize current droughts within the historical record. Climate Engine is currently being used by U.S. federal agencies and researchers to develop baseline conditions and impact assessments related to agricultural, ecological, and hydrological drought. Climate Engine is also working with the Famine Early Warning Systems Network (FEWS NET) to expedite monitoring agricultural drought over broad areas at risk of food insecurity globally.
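
    A hedged sketch of the kind of computation Climate Engine runs on Google Earth Engine: a seasonal precipitation anomaly from a gridded archive. It requires an authenticated earthengine-api install; the CHIRPS collection id is a real Earth Engine dataset, but the region, dates, and baseline arithmetic are illustrative, not Climate Engine's implementation.

      # Seasonal precipitation anomaly with the Earth Engine Python API.
      import ee

      ee.Initialize()

      chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD").select("precipitation")
      recent = chirps.filterDate("2016-06-01", "2016-09-01").sum()
      baseline = (chirps.filterDate("1981-01-01", "2016-01-01")
                        .filter(ee.Filter.calendarRange(6, 8, "month")))

      mean = baseline.sum().divide(35)     # rough 35-year mean summer total (mm)
      anomaly = recent.subtract(mean)      # departure from the baseline (mm)

      region = ee.Geometry.Rectangle([-120, 35, -115, 40])  # illustrative region
      stats = anomaly.reduceRegion(ee.Reducer.mean(), region, scale=5000)
      print(stats.getInfo())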

  1. Publications - PDF 99-24D | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Keywords: Engineering; Engineering Geologic Map; Engineering Geology; Geologic Map; Geology; Land Subsidence; Landslide

  2. [Biomedical information on the internet using search engines. A one-year trial].

    PubMed

    Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina

    2004-01-01

    The internet is a communication medium and content distributor that provides information in a general sense, but it can be of great utility for the search and retrieval of biomedical information. Search engines are a great help in rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by keywords ("pediatric" or "evidence") using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations output by each search. We evaluated the formal quality of web sites and their domain extensions. Moreover, we compared the output of each search at the start of the study and after one year, and we considered the number of web sites cited again as a criterion of reliability. We found some interesting results, which are reported throughout the text. Our findings point to the extreme dynamicity of information on the Web and, for this reason, we advise great caution when using search and meta-search engines as tools for searching for and retrieving reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step, for better defining a search and for finding institutional web sites. This paper supports a more informed approach to the universe of biomedical information on the internet.

  3. WebDMS: A Web-Based Data Management System for Environmental Data

    NASA Astrophysics Data System (ADS)

    Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.

    2015-12-01

    DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.

  4. Knowledge Discovery, Integration and Communication for Extreme Weather and Flood Resilience Using Artificial Intelligence: Flood AI Alpha

    NASA Astrophysics Data System (ADS)

    Demir, I.; Sermet, M. Y.

    2016-12-01

    Nobody is immune to extreme events or natural hazards that can lead to large-scale consequences for the nation and the public. One of the solutions to reduce the impacts of extreme events is to invest in improving resilience: the ability to better prepare for, plan for, recover from, and adapt to disasters. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This abstract presents our project on developing a resilience framework for flooding to improve societal preparedness, with the following objectives: (a) develop a generalized ontology for extreme events with a primary focus on flooding; (b) develop a knowledge engine with voice recognition, artificial intelligence, natural language processing, and an inference engine, which will utilize the flood ontology and concepts to connect user input to relevant knowledge discovery outputs on flooding; (c) develop a data acquisition and processing framework drawing on existing environmental observations, forecast models, and social networks, which will utilize the framework, capabilities and user base of the Iowa Flood Information System (IFIS) to populate and test the system; and (d) develop a communication framework to support user interaction and delivery of information to users. The interaction and delivery channels will include voice and text input via a web-based system (e.g. IFIS), agent-based bots (e.g. Microsoft Skype, Facebook Messenger), smartphone and augmented reality applications (e.g. smart assistant), and automated web workflows (e.g. IFTTT, CloudWork) to open knowledge discovery for flooding to thousands of community-extensible web workflows.

  5. HydroViz: evaluation of a web-based tool for improving hydrology education

    NASA Astrophysics Data System (ADS)

    Habib, E.; Ma, Y.; Williams, D.; Sharif, H.; Hossain, F.

    2012-02-01

    HydroViz is a web-based, student-centered, highly visual educational tool designed to support active learning in the field of engineering hydrology. The development of HydroViz is informed by recent advances in hydrologic data, numerical simulations, visualization and web-based technologies. An evaluation study was conducted to determine the effectiveness of HydroViz, to examine the buy-in of the program, and to identify project components that need to be improved. A total of 182 students from seven freshman and junior/senior-level undergraduate classes in three universities participated in the study over the course of two semesters (spring 2010 and fall 2010). Data sources included homework assignments, online surveys, and informal interviews with students. Descriptive statistics were calculated for homework and the survey. Qualitative analysis of students' comments and informal interview notes was also conducted to identify ideas and patterns. HydroViz was effective in facilitating students' learning and understanding of hydrologic concepts and in increasing related skills. Students had positive perceptions of various features of HydroViz, and they believe that HydroViz fits well in the curriculum. The experience with HydroViz was somewhat effective in raising freshman civil engineering students' interest in hydrology. In general, HydroViz tends to be more effective with students in junior- or senior-level classes than with students in freshman classes. There do not seem to be obvious differences between the universities. Students identified some issues that can be addressed to improve HydroViz. Future adaptation and expansion studies are being planned to scale up the application and utility of HydroViz in various hydrology and water-resources engineering curriculum settings.

  6. Introduction to Chemical Engineering Reactor Analysis: A Web-Based Reactor Design Game

    ERIC Educational Resources Information Center

    Orbey, Nese; Clay, Molly; Russell, T.W. Fraser

    2014-01-01

    An approach to explain chemical engineering through a Web-based interactive game design was developed and used with college freshman and junior/senior high school students. The goal of this approach was to demonstrate how to model a lab-scale experiment, and use the results to design and operate a chemical reactor. The game incorporates both…

  7. What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?

    ERIC Educational Resources Information Center

    Seyedarabi, Faezeh

    2014-01-01

    This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…

  8. How To Do Field Searching in Web Search Engines: A Field Trip.

    ERIC Educational Resources Information Center

    Hock, Ran

    1998-01-01

    Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…

  9. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    PubMed

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

    Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.
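
    As a complement to the abstract, here is a minimal Python sketch of the pipeline concept PipelineDog manages: named steps run in order, with the run stopping at the first failure. The step commands are illustrative; PipelineDog's own script syntax is not reproduced here.

```python
# Minimal sketch of a pipeline runner: ordered named steps, fail-fast.
import subprocess

def run_pipeline(steps):
    """Run shell-command steps in order, stopping on the first failure."""
    for name, cmd in steps:
        print(f"[pipeline] running step: {name}")
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            raise RuntimeError(f"step '{name}' failed with code {result.returncode}")

if __name__ == "__main__":
    run_pipeline([
        ("say-hello", "echo hello"),   # illustrative steps, not real analyses
        ("list-files", "ls"),
    ])
```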

  10. An overview of the web-based Google Earth coincident imaging tool

    USGS Publications Warehouse

    Chander, Gyanesh; Kilough, B.; Gowda, S.

    2010-01-01

    The Committee on Earth Observing Satellites (CEOS) Visualization Environment (COVE) tool is a browser-based application that leverages the web version of Google Earth to display satellite sensor coverage areas. The analysis tool can also be used to identify near-simultaneous surface observation locations for two or more satellites. The National Aeronautics and Space Administration (NASA) CEOS System Engineering Office (SEO) worked with the CEOS Working Group on Calibration and Validation (WGCV) to develop the COVE tool. The CEOS member organizations are currently operating and planning hundreds of Earth Observation (EO) satellites. Standard cross-comparison exercises between multiple sensors, undertaken to compare near-simultaneous surface observations and to identify corresponding image pairs, are time-consuming and labor-intensive. COVE is a suite of tools that have been developed to make such tasks easier.

  11. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: a geospatial service search finds coarse services on the web, and ontology reasoning is then used to refine the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
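
    The two-stage retrieval described above can be illustrated with a small sketch: a coarse keyword search over a service registry, followed by an ontology-based refinement. The registry entries, ontology, and capability names below are invented for illustration, not the paper's actual framework.

```python
# Two-stage retrieval sketch: coarse keyword search, then ontology refinement.
COARSE_REGISTRY = [
    {"name": "Antarctic DEM WCS", "keywords": ["antarctic", "elevation", "coverage"]},
    {"name": "Global Landsat WMS", "keywords": ["global", "imagery", "map"]},
    {"name": "Antarctic Ice WFS", "keywords": ["antarctic", "ice", "feature"]},
]

# Toy ontology: which OGC service types satisfy a requested capability.
ONTOLOGY = {"map_rendering": {"WMS"}, "raw_coverage": {"WCS"}, "vector_features": {"WFS"}}

def coarse_search(term):
    """Stage 1: find coarse services by keyword."""
    return [s for s in COARSE_REGISTRY if term in s["keywords"]]

def refine(services, capability):
    """Stage 2: keep only services whose type satisfies the capability."""
    wanted = ONTOLOGY[capability]
    return [s for s in services if any(t in s["name"] for t in wanted)]

if __name__ == "__main__":
    coarse = coarse_search("antarctic")
    print(refine(coarse, "raw_coverage"))  # -> the Antarctic DEM WCS entry
```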

  12. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data, as well as associated anatomical images, from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.
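
    The individual and batch-mode HTTP download described above can be sketched briefly. The image URLs here are hypothetical stand-ins, and the original system used a signed Java applet rather than a script.

```python
# Batch-mode HTTP retrieval sketch; URLs are hypothetical placeholders.
import urllib.request
from pathlib import Path

def batch_download(urls, dest_dir="images"):
    """Fetch each URL and save it under dest_dir, one file per image."""
    Path(dest_dir).mkdir(exist_ok=True)
    for url in urls:
        filename = Path(dest_dir) / url.rsplit("/", 1)[-1]
        print(f"downloading {url} -> {filename}")
        urllib.request.urlretrieve(url, filename)

if __name__ == "__main__":
    batch_download([
        "https://example.org/visible-human/head_0001.png",  # hypothetical URL
        "https://example.org/visible-human/head_0002.png",
    ])
```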

  13. The charged particle accelerators subsystems modeling

    NASA Astrophysics Data System (ADS)

    Averyanov, G. P.; Kobylyatskiy, A. V.

    2017-01-01

    This paper presents a web-based resource for information support of engineering, science, and education in electrophysics, containing web-based tools for simulating the subsystems of charged particle accelerators. The motivation for developing a web environment for virtual electrophysical laboratories is formulated, and trends in the design of dynamic web environments for supporting scientific research and e-learning are analyzed within the framework of the Open Education concept.

  14. Start Your Engines: Surfing with Search Engines for Kids.

    ERIC Educational Resources Information Center

    Byerly, Greg; Brodie, Carolyn S.

    1999-01-01

    Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)

  15. Polymer-Polymer Bilayer Actuator

    NASA Technical Reports Server (NTRS)

    Su, Ji (Inventor); Harrison, Joycelyn S. (Inventor); St.Clair, Terry L. (Inventor)

    2003-01-01

    A device for providing an electromechanical response includes two polymeric webs bonded to each other along their lengths. At least one polymeric web is activated upon application thereto of an electric field and exhibits electrostriction by rotation of polar graft moieties within the polymeric web. In one embodiment, one of the two polymeric webs is an active web upon application thereto of the electric field, and the other polymeric web is a non-active web upon application thereto of the electric field. In another embodiment, both of the two polymeric webs are capable of being active webs upon application thereto of the electric field. However, these two polymeric webs are alternately activated and non-activated by the electric field.

  16. Informational content of official pharmaceutical industry web sites about treatments for erectile dysfunction.

    PubMed

    Waack, Katherine E; Ernst, Michael E; Graber, Mark A

    2004-12-01

    In the last 5 years, several treatments have become available for erectile dysfunction (ED). During this same period, consumer use of the Internet for health information has increased rapidly. In traditional direct-to-consumer advertisements, viewers are often referred to a pharmaceutical company Web site for further information. The objective of this study was to evaluate the accessibility and informational content of 5 pharmaceutical company Web sites about ED treatments. Using 10 popular search engines and 1 specialized search engine, the accessibility of the official pharmaceutical company-sponsored Web site was determined by searching under brand and generic names. One company also manufactures an ED device; this site was also included. A structured, explicit review of the information found on these sites was conducted. Of 110 searches (1 for each treatment, including the corresponding generic drug name, using each search engine), 68 yielded the official pharmaceutical company Web site within the first 10 links. Removal of outliers (for both brand and generic name searches) resulted in 68 of 77 searches producing the pharmaceutical company Web site for the brand-name drug in the top 10 links. Although all pharmaceutical company Web sites contained general information on adverse effects and contraindications to use, only 2 sites gave actual percentages. Three sites provided references for their materials or discussed other treatment or drug options, while 4 of the sites contained pronounced advertising or emotive content. None mentioned the cost of therapy. The information contained on pharmaceutical company Web sites for ED treatments is superficial and aimed primarily at consumers. It is largely promotional and provides only limited information needed to effectively compare treatment options.

  17. The Role of the Web Server in a Capstone Web Application Course

    ERIC Educational Resources Information Center

    Umapathy, Karthikeyan; Wallace, F. Layne

    2010-01-01

    Web applications have become commonplace in the Information Systems curriculum. Much of the discussion about Web development for capstone courses has centered on the scripting tools. Very little has been discussed about different ways to incorporate the Web server into Web application development courses. In this paper, three different ways of…

  18. Aladin Lite: Embed your Sky in the Browser

    NASA Astrophysics Data System (ADS)

    Boch, T.; Fernique, P.

    2014-05-01

    I will introduce and describe Aladin Lite, a lightweight interactive sky viewer running natively in the browser. The past five years have seen the emergence of powerful and complex web applications, thanks to major improvements in JavaScript engines and the advent of HTML5. At the same time, browser plugins (Java applets, Flash, Silverlight) that were commonly used to run rich Internet applications are declining and are not well suited for mobile devices. The Aladin team took this opportunity to develop Aladin Lite, a lightweight version of Aladin geared towards simple visualization of a sky region. Relying on the widely supported HTML5 canvas element, it provides an intuitive user interface running on desktops and tablets. This first version allows one to interactively visualize multi-resolution HEALPix images and superimpose tabular data and footprints. Aladin Lite is easily embeddable on any web page and may be of interest to data providers, who will be able to use it as an interactive previewer for their own image surveys, previously pre-processed as explained in detail in the poster "Create & publish your Hierarchical Progressive Survey". I will present the main features of Aladin Lite as well as the JavaScript API, which gives the building blocks to create rich interactions between a web page and Aladin Lite.

  19. New Jersey StreamStats: A web application for streamflow statistics and basin characteristics

    USGS Publications Warehouse

    Watson, Kara M.; Janowicz, Jon A.

    2017-08-02

    StreamStats is an interactive, map-based web application from the U.S. Geological Survey (USGS) that allows users to easily obtain streamflow statistics and watershed characteristics for both gaged and ungaged sites on streams throughout New Jersey. Users can determine flood magnitude and frequency, monthly flow-duration, monthly low-flow frequency statistics, and watershed characteristics for ungaged sites by selecting a point along a stream, or they can obtain this information for streamgages by selecting a streamgage location on the map. StreamStats provides several additional tools useful for water-resources planning and management, as well as for engineering purposes. StreamStats is available for most states and some river basins through a single web portal.Streamflow statistics for water resources professionals include the 1-percent annual chance flood flow (100-year peak flow) used to define flood plain areas and the monthly 7-day, 10-year low flow (M7D10Y) used in water supply management and studies of recreation, wildlife conservation, and wastewater dilution. Additionally, watershed or basin characteristics, including drainage area, percent area forested, and average percent of impervious areas, are commonly used in land-use planning and environmental assessments. These characteristics are easily derived through StreamStats.

  20. Towards a new generation of fibre optic chemical sensors based on spider silk threads

    NASA Astrophysics Data System (ADS)

    Hey Tow, Kenny; Chow, Desmond M.; Vollrath, Fritz; Dicaire, Isabelle; Gheysens, Tom; Thévenaz, Luc

    2017-04-01

    A spider uses up to seven different types of silk, all having specific functions, to build its web. For scientists, native silk - directly extracted from spiders - is a tough, biodegradable and biocompatible thread used mainly for tissue engineering and textile applications. Blessed with outstanding optical properties, this protein strand can also be used as an optical fibre and is, moreover, intrinsically sensitive to chemical compounds. In this communication, a pioneering proof-of-concept experiment using spider silk, in its pristine condition, as a new type of fibre-optic relative humidity sensor will be demonstrated and its potential for future applications discussed.

  1. Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet

    NASA Astrophysics Data System (ADS)

    Hess, A.

    Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.

  2. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.

  3. Using task analysis to improve the requirements elicitation in health information system.

    PubMed

    Teixeira, Leonor; Ferreira, Carlos; Santos, Beatriz Sousa

    2007-01-01

    This paper describes the application of task analysis within the design process of a Web-based information system for managing clinical information in hemophilia care, in order to improve the requirements elicitation and, consequently, to validate the domain model obtained in a previous phase of the design process (system analysis). The use of task analysis in this case proved to be a practical and efficient way to improve the requirements engineering process by involving users in the design process.

  4. PhytoCRISP-Ex: a web-based and stand-alone application to find specific target sequences for CRISPR/CAS editing.

    PubMed

    Rastogi, Achal; Murik, Omer; Bowler, Chris; Tirichine, Leila

    2016-07-01

    With the emerging interest in phytoplankton research, the need to establish genetic tools for the functional characterization of genes is indispensable. The CRISPR/Cas9 system is now well recognized as an efficient and accurate reverse genetic tool for genome editing. Several computational tools have been published allowing researchers to find candidate target sequences for the engineering of CRISPR vectors, while searching possible off-targets for the predicted candidates. These tools provide built-in genome databases of common model organisms that are used for CRISPR target prediction. Although their predictions are highly sensitive, their design is inadequate for non-model genomes, most notably protists. This motivated us to design a new CRISPR target finding tool, PhytoCRISP-Ex. Our software offers CRISPR target predictions using an extended list of phytoplankton genomes and also delivers a user-friendly standalone application that can be used for any genome. The software attempts to integrate, for the first time, most available phytoplankton genome information and provide a web-based platform for Cas9 target prediction within them with high sensitivity. By offering a standalone version, PhytoCRISP-Ex maintains the independence to be used with any organism and widens its applicability in high-throughput pipelines. PhytoCRISP-Ex outperforms all the existing tools by computing the availability of restriction sites over the most probable Cas9 cleavage sites, which can be ideal for mutant screens. PhytoCRISP-Ex is a simple, fast and accurate web interface with 13 pre-indexed and regularly updated phytoplankton genomes. The software was also designed as a UNIX-based standalone application that allows the user to search for target sequences in the genomes of a variety of other species.
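
    To illustrate the core idea, here is a simplified Cas9 target scan: find 20-nt protospacers followed by an NGG PAM and flag candidates whose likely cleavage region contains a restriction site, mirroring in toy form the restriction-site check PhytoCRISP-Ex performs. The enzyme list, window choice, and demo sequence are illustrative only.

```python
# Simplified Cas9 target-site scan with a restriction-site check.
import re

RESTRICTION_SITES = {"EcoRI": "GAATTC", "AvrII": "CCTAGG"}  # illustrative subset

def find_cas9_targets(seq: str):
    """Yield (position, protospacer, PAM, overlapping restriction enzymes)."""
    seq = seq.upper()
    # Lookahead allows overlapping 20-nt protospacer + NGG PAM matches.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        protospacer, pam = m.group(1), m.group(2)
        # Cas9 cuts ~3 bp upstream of the PAM; inspect a window around the cut.
        cut_window = protospacer[12:] + pam
        enzymes = [e for e, site in RESTRICTION_SITES.items() if site in cut_window]
        yield m.start(), protospacer, pam, enzymes

if __name__ == "__main__":
    demo = "TTTACGTACGTACGGAATTCGTACGTAGGCTTAAGCCTAGGATCGATCGA"
    for hit in find_cas9_targets(demo):
        print(hit)
```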

  5. Artemis: Integrating Scientific Data on the Grid (Preprint)

    DTIC Science & Technology

    2004-07-01

    The Theseus execution engine [Barish and Knoblock 03] is used to efficiently execute the generated datalog program. Theseus has a wide variety of operations to query databases, web sources, and web services, as well as relational operations such as selection, union, and projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel.

  6. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    PubMed

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  7. A two-staged approach to developing and evaluating an ontology for delivering personalized education to diabetic patients.

    PubMed

    Quinn, Susan; Bond, Raymond; Nugent, Chris

    2018-09-01

    Ontologies are often used in biomedical and health domains to provide a concise and consistent means of attributing meaning to medical terminology. While domain specialists are typically novices in ontology engineering, their evaluation of an ontology provides an opportunity to enhance its objectivity, accuracy, and coverage of the domain. This paper evaluates the viability of using ontology engineering novices to evaluate and enrich an ontology that can be used for personalized diabetic patient education. We describe a methodology for engaging healthcare and information technology specialists with a range of ontology engineering tasks. We used 87.8% of the data collected to validate the accuracy of our ontological model, and the contributions enabled a 16% increase in the class size and an 18% increase in object properties. Based on these results, we propose that ontology engineering novices can make valuable contributions to ontology development. Application-specific evaluation of the ontology using a semantic-web-based architecture is also discussed.

  8. Tethys: A Platform for Water Resources Modeling and Decision Support Apps

    NASA Astrophysics Data System (ADS)

    Swain, N. R.; Christensen, S. D.; Jones, N.; Nelson, E. J.

    2014-12-01

    Cloud-based applications or apps are a promising medium through which water resources models and data can be conveyed in a user-friendly environment, making them more accessible to decision-makers and stakeholders. In the context of this work, a water resources web app is a web application that exposes limited modeling functionality for a scenario exploration activity in a structured workflow (e.g., land use change runoff analysis, snowmelt runoff prediction, and flood potential analysis). The technical expertise required to develop water resources web apps can be a barrier to many potential developers of water resources apps. One challenge that developers face is providing spatial storage, analysis, and visualization for the spatial data that is inherent to water resources models. The software projects that provide this functionality are non-standard to web development, and there are a large number of free and open source software (FOSS) projects to choose from. In addition, it is often necessary to synthesize several software projects to provide all of the needed functionality. Another challenge for the developer is orchestrating the use of several software components. Consequently, the initial software development investment required to deploy an effective water resources cloud-based application can be substantial. The Tethys Platform has been developed to lower the technical barrier and minimize the initial development investment that prohibits many scientists and engineers from making use of the web app medium. Tethys synthesizes several software projects, including PostGIS for spatial storage, 52°North WPS for spatial analysis, GeoServer for spatial publishing, Google Earth™, Google Maps™ and OpenLayers for spatial visualization, and Highcharts for plotting tabular data. The software selection came after a literature review of software projects being used to create existing earth sciences web apps. All of the software is linked via a Python-powered software development kit (SDK). Tethys developers use the SDK to build their apps and incorporate the needed functionality from the software suite. The presentation will include several apps that have been developed using Tethys to demonstrate its capabilities. Based upon work supported by the National Science Foundation under Grant No. 1135483.

  9. Finding Specification Pages from the Web

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., ``Ch. d'Yquem'') and its class label (e.g., ``wine''). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., ``county''-``Sauternes'') in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., ``county'' for the class ``wine''). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of specification retrieval.
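
    The site frequency idea can be sketched compactly: score each candidate attribute by the number of distinct web sites (authors) whose pages carry it, so boilerplate repeated within a single site does not dominate. This is a hedged reconstruction of the general idea, not the paper's exact scoring function.

```python
# Site frequency sketch: count distinct sites per attribute, not raw pages.
from collections import defaultdict
from urllib.parse import urlparse

def site_frequency(attribute_occurrences):
    """attribute_occurrences: iterable of (attribute, page_url) pairs."""
    sites = defaultdict(set)
    for attribute, url in attribute_occurrences:
        sites[attribute].add(urlparse(url).netloc)  # each site counted once
    return {attr: len(s) for attr, s in sites.items()}

if __name__ == "__main__":
    occurrences = [
        ("county", "http://wines-a.example.com/yquem"),
        ("county", "http://wines-a.example.com/margaux"),    # same site, counted once
        ("county", "http://cellar-b.example.org/sauternes"),
        ("banner", "http://wines-a.example.com/yquem"),
    ]
    print(site_frequency(occurrences))  # {'county': 2, 'banner': 1}
```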

  10. TermGenie – a web-application for pattern-based ontology class generation

    DOE PAGES

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.; ...

    2014-01-01

    Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. Lastly, TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and in some cases inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  11. TermGenie – a web-application for pattern-based ontology class generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.

    Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. Lastly, TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and in some cases inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  12. TermGenie - a web-application for pattern-based ontology class generation.

    PubMed

    Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J

    2014-01-01

    Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and in some cases inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.
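
    A toy illustration of pattern-based class generation in the spirit of TermGenie: a design pattern (template) is filled with existing class labels and IDs to yield a new class label and an OWL-style equivalence axiom. The template format here is invented for illustration and is not TermGenie's actual template language.

```python
# Template-based ontology class generation sketch; template format is invented.
def generate_class(template, **slots):
    """Fill a design pattern's label and axiom with the given slot values."""
    return {
        "label": template["label"].format(**slots),
        "equivalent_to": template["axiom"].format(**slots),
    }

REGULATION_TEMPLATE = {
    "label": "regulation of {process_label}",
    "axiom": "'biological regulation' and (regulates some {process_id})",
}

if __name__ == "__main__":
    new_class = generate_class(
        REGULATION_TEMPLATE,
        process_label="apoptotic process",
        process_id="GO:0006915",
    )
    print(new_class["label"])          # regulation of apoptotic process
    print(new_class["equivalent_to"])  # OWL-style equivalence axiom string
```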

  13. [Design of visualized medical images network and web platform based on MeVisLab].

    PubMed

    Xiang, Jun; Ye, Qing; Yuan, Xun

    2017-04-01

    With the trend of the development of "Internet +", further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally by MeVisLab and packaged as X3D (Extensible 3D Graphics) files, as shown in the present paper. Then, a B/S (Browser/Server) system specially designed for 3D images is built using HTML5 and a WebGL rendering engine library, and the X3D image files are parsed and rendered by the system. The results of this study showed that the platform is suitable for multiple operating systems, realizing cross-platform use and the mobility of medical image data. Future development directions for the medical imaging platform are also discussed. It is noted that web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.

  14. Electrohydrodynamic spinning of random-textured silver webs for electrodes embedded in flexible organic solar cells

    NASA Astrophysics Data System (ADS)

    Yoon, Dai Geon; Chin, Byung Doo; Bail, Robert

    2017-03-01

    A convenient process for fabricating a transparent conducting electrode on a flexible substrate is essential for numerous low-cost optoelectronic devices, including organic solar cells (OSCs), touch sensors, and free-form lighting applications. Solution-processed metal-nanowire arrays are attractive due to their low sheet resistance and optical clarity. However, the limited conductance at wire junctions and the rough surface topology still need improvement. Here, we present a facile process of electrohydrodynamic spinning using a silver (Ag)-polymer composite paste with high viscosity. Unlike the metal-nanofiber web formed by conventional electrospinning, a relatively thick, but still invisible to the naked eye, random Ag-web pattern was formed on a glass substrate. Process parameters such as nozzle diameter, voltage, flow rate, standoff height, and nozzle-scanning speed were systematically engineered. The resulting random-textured Ag webs were embedded in a flexible substrate by in-situ photo-polymerization, release from the glass substrate, and post-annealing. OSCs with a donor-acceptor polymeric heterojunction photoactive layer were prepared on the Ag-web-embedded flexible films with various Ag-web densities. The short-circuit current and the power conversion efficiency of an OSC with a Ag-web-embedded electrode were not as high as those of the control sample with an indium-tin-oxide electrode. However, the Ag-web textures embedded in the OSC served well as electrodes when bent (6-mm radius), showing a power conversion efficiency of 2.06% (2.72% for the flat OSC), and the electrical stability of the Ag-web-textured patterns was maintained for up to 1,000 cycles of bending.

  15. XMM-Newton Mobile Web Application

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Kennedy, M.; Rodríguez, P.; Hernández, C.; Saxton, R.; Gabriel, C.

    2013-10-01

    We present the first XMM-Newton mobile web application, coded using new web technologies such as HTML5, the jQuery Mobile framework, and the D3 data-driven JavaScript library. This new mobile web application focuses on re-formatted contents extracted directly from the XMM-Newton web pages, optimizing the contents for mobile devices. The main goals of this development were to reach all kinds of handheld devices and operating systems, while minimizing software maintenance. The application has therefore been developed as a mobile web implementation rather than a more costly native application. New functionality will be added regularly.

  16. Ajax Architecture Implementation Techniques

    NASA Astrophysics Data System (ADS)

    Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, Tabassum, M. Khader

    2012-03-01

    Today's rich Web applications use a mix of Java Script and asynchronous communication with the application server. This mechanism is also known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server, and in doing so, use partial page refresh instead of reloading the entire Web page. AJAX (Asynchronous JavaScript and XML) is a powerful Web development model for browser-based Web applications. Technologies that form the AJAX model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known. However, AJAX combines these technologies to let Web pages retrieve small amounts of data from the server without having to reload the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0 enabled by the Ajax architecture has given rise to a new level of user interactivity through web browsers. Many new and extremely popular Web applications have been introduced such as Google Maps, Google Docs, Flickr, and so on. Ajax Toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.

  17. JACOB: an enterprise framework for computational chemistry.

    PubMed

    Waller, Mark P; Dresselhaus, Thomas; Yang, Jack

    2013-06-15

    Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.

  18. Recent advancements on the development of web-based applications for the implementation of seismic analysis and surveillance systems

    NASA Astrophysics Data System (ADS)

    Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.

    2014-12-01

    Recently, a novel set of modules has been included in the open source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. Web applications can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, the usage of HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line-driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in JavaScript, CSS and HTML, as well as faster and more efficient web browsers, including mobile ones. It is foreseeable that in the near future web applications will be as powerful and efficient as native applications. Hence the work described here has been a first step towards bringing the open source Earthworm seismic data processing system to this new paradigm.

  19. Merlin: Computer-Aided Oligonucleotide Design for Large Scale Genome Engineering with MAGE.

    PubMed

    Quintin, Michael; Ma, Natalie J; Ahmed, Samir; Bhatia, Swapnil; Lewis, Aaron; Isaacs, Farren J; Densmore, Douglas

    2016-06-17

    Genome engineering technologies now enable precise manipulation of organism genotype, but can be limited in scalability by their design requirements. Here we describe Merlin (http://merlincad.org), an open-source web-based tool to assist biologists in designing experiments using multiplex automated genome engineering (MAGE). Merlin provides methods to generate pools of single-stranded DNA oligonucleotides (oligos) for MAGE experiments by performing free-energy calculation and BLAST scoring on a sliding window spanning the targeted site. These oligos are designed not only to improve recombination efficiency, but also to minimize off-target interactions. The application further assists experiment planning by reporting predicted allelic replacement rates after multiple MAGE cycles, and enables rapid result validation by generating primer sequences for multiplexed allele-specific colony PCR. Here we describe the Merlin oligo and primer design procedures and validate their functionality compared to OptMAGE by eliminating seven AvrII restriction sites from the Escherichia coli genome.
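
    A simplified sketch of the sliding-window oligo design idea: center the desired replacement within a fixed-length oligo and enumerate window placements for downstream scoring. The free-energy calculation and BLAST scoring that Merlin performs are omitted; all names and parameters here are illustrative.

```python
# Sliding-window MAGE oligo enumeration sketch; scoring steps are omitted.
def candidate_oligos(genome: str, site: int, replacement: str, oligo_len: int = 90):
    """Yield oligos carrying `replacement` at `site`, at varying window offsets."""
    mutated = genome[:site] + replacement + genome[site + len(replacement):]
    for offset in range(0, oligo_len - len(replacement) + 1, 10):
        start = site - offset
        if start < 0 or start + oligo_len > len(mutated):
            continue  # window would fall off the sequence
        yield mutated[start:start + oligo_len]

if __name__ == "__main__":
    genome = "ACGT" * 60  # 240-bp toy sequence
    for oligo in candidate_oligos(genome, site=120, replacement="TTT"):
        print(oligo[:20], "...", len(oligo), "nt")
```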

  20. Electronic Biomedical Literature Search for Budding Researcher

    PubMed Central

    Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.

    2013-01-01

    Search for specific and well defined literature related to a subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of the medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of the medical literature include Google, Google Scholar, Scirus, Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine/database while searching for a particular type of data. Knowledge about the types of literature and levels of evidence, and details about the features of search engines, such as availability, user interface, ease of access, reputable content, and period of time covered, allow their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937

  1. Electronic biomedical literature search for budding researcher.

    PubMed

    Thakre, Subhash B; Thakre S, Sushama S; Thakre, Amol D

    2013-09-01

    Search for specific and well defined literature related to a subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of the medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of the medical literature include Google, Google Scholar, Scirus, Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine/database while searching for a particular type of data. Knowledge about the types of literature and levels of evidence, and details about the features of search engines, such as availability, user interface, ease of access, reputable content, and period of time covered, allow their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research.
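
    As a concrete example of programmatic literature searching, the sketch below queries PubMed through NCBI's public E-utilities interface (a real, documented service); the search term itself is just an example.

```python
# Query PubMed via NCBI E-utilities (esearch) and return matching PMIDs.
import json
import urllib.parse
import urllib.request

def pubmed_search(term, retmax=5):
    params = urllib.parse.urlencode({
        "db": "pubmed", "term": term, "retmode": "json", "retmax": retmax,
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]  # list of PubMed IDs (PMIDs)

if __name__ == "__main__":
    print(pubmed_search('"literature search"[Title] AND review[Publication Type]'))
```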

  2. Getting To Know the "Invisible Web."

    ERIC Educational Resources Information Center

    Smith, C. Brian

    2001-01-01

    Discusses the portions of the World Wide Web that cannot be accessed via directories or search engines, explains why they can't be accessed, and offers suggestions for reference librarians to find these sites. Lists helpful resources and gives examples of invisible Web sites which are often databases. (LRW)

  3. Quality of Web-Based Information on Cannabis Addiction

    ERIC Educational Resources Information Center

    Khazaal, Yasser; Chatton, Anne; Cochand, Sophie; Zullino, Daniele

    2008-01-01

    This study evaluated the quality of Web-based information on cannabis use and addiction and investigated particular content quality indicators. Three keywords ("cannabis addiction," "cannabis dependence," and "cannabis abuse") were entered into two popular World Wide Web search engines. Websites were assessed with a standardized proforma designed…

  4. Content Documents Management

    NASA Technical Reports Server (NTRS)

    Muniz, R.; Hochstadt, J.; Boelke J.; Dalton, A.

    2011-01-01

    The Content Documents are created and managed under the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e., AIX, Linux, Solaris, and Windows). Before an OSI can be created, the team must create a Content Document, which provides the information for a workstation or server, with the list of all the software that is to be installed on it and the set to which the hardware belongs; this can be, for example, the LDS, the ADS, or the FR-l. The objective of this project is to create a user interface Web application that can manage the information in the Content Documents, with all the correct validations and filters for administrator purposes. For this project we used one of the most effective tools for agile web development, Ruby on Rails. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is amazing to see how a student can learn about OOP features with the Ruby language, manage the user interface with HTML and CSS, create associations and queries with gems, manage databases and run a server with MySQL, run shell commands with the command prompt, and create Web frameworks with Rails. All of this in a real-world project, and in just fifteen weeks!

  5. Interactive Visualization of Parking Orbits Around the Moon: An X3D Application for a NASA Lunar Mission Study

    NASA Technical Reports Server (NTRS)

    Murphy, Douglas G.; Qu, Min; Salas, Andrea O.

    2006-01-01

    The NASA Integrated Modeling and Simulation (IM&S) project aims to develop a collaborative engineering system to include distributed analysis, integrated tools, and web-enabled graphics. Engineers on the IM&S team were tasked with applying IM&S capabilities to an orbital mechanics analysis for a lunar mission study. An interactive lunar globe was created to show 7 landing sites, contour lines depicting the energy required to reach a given site, and the optimal lunar orbit orientation to meet the mission constraints. Activation of the lunar globe rotation shows the change of the angle between the landing site latitude and the orbit plane. A heads-up-display was used to embed straightforward interface elements.

  6. Nuclease Target Site Selection for Maximizing On-target Activity and Minimizing Off-target Effects in Genome Editing

    PubMed Central

    Lee, Ciaran M; Cradick, Thomas J; Fine, Eli J; Bao, Gang

    2016-01-01

    The rapid advancement in targeted genome editing using engineered nucleases such as ZFNs, TALENs, and CRISPR/Cas9 systems has resulted in a suite of powerful methods that allows researchers to target any genomic locus of interest. A complementary set of design tools has been developed to aid researchers with nuclease design, target site selection, and experimental validation. Here, we review the various tools available for target selection in designing engineered nucleases, and for quantifying nuclease activity and specificity, including web-based search tools and experimental methods. We also elucidate challenges in target selection, especially in predicting off-target effects, and discuss future directions in precision genome editing and its applications. PMID:26750397

  7. A unified architecture for biomedical search engines based on semantic web technologies.

    PubMed

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  8. Web data mining

    NASA Astrophysics Data System (ADS)

    Wibonele, Kasanda J.; Zhang, Yanqing

    2002-03-01

    A web data mining system using granular computing and ASP programming is proposed. This is a web-based application that allows web users to submit survey data for many different companies. The survey is a collection of questions that will help these companies develop and improve their business and customer service by analyzing the survey data. The web application allows users to submit data from anywhere, and all survey data are collected into a database for further analysis. An administrator of the web application can log in to the system and view all the submitted data. The web application resides on a web server, and the database resides on an MS SQL server.

  9. Development of Health Information Search Engine Based on Metadata and Ontology

    PubMed Central

    Song, Tae-Min; Jin, Dal-Lae

    2014-01-01

    Objectives The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers. PMID:24872907

  10. Development of health information search engine based on metadata and ontology.

    PubMed

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.
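
    A small sketch of the metadata idea described above: a Dublin Core record extended with an element describing the target audience, here expressed with the DCMI term dcterms:audience. The element values are illustrative, and the paper's exact schema is not reproduced.

```python
# Dublin Core record extended with a target-audience element (dcterms:audience).
from xml.etree.ElementTree import Element, SubElement, tostring

DC = "http://purl.org/dc/elements/1.1/"
DCTERMS = "http://purl.org/dc/terms/"

def make_record(title, subject, audience):
    record = Element("record")
    SubElement(record, f"{{{DC}}}title").text = title
    SubElement(record, f"{{{DC}}}subject").text = subject      # e.g. a SNOMED CT term
    SubElement(record, f"{{{DCTERMS}}}audience").text = audience
    return record

if __name__ == "__main__":
    rec = make_record("Managing type 2 diabetes", "Diabetes mellitus type 2",
                      "patients and caregivers")
    print(tostring(rec, encoding="unicode"))
```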

  11. 77 FR 74278 - Proposed Information Collection (Internet Student CPR Web Registration Application); Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... (Internet Student CPR Web Registration Application); Comment Request AGENCY: Veterans Health Administration... web registration application. DATES: Written comments and recommendations on the proposed collection.... Title: Internet Student CPR Web Registration Application, VA Form 10-0468. OMB Control Number: 2900-0746...

  12. 78 FR 8108 - NextGen Solutions Vendors Guide

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-05

    ... Commerce is developing a web-based NextGen Solutions Vendors Guide intended to be used by foreign air... being listed on the Vendors Guide Web site should submit their company's name, Web site address, contact... to aviation system upgrades) Example: Engineering Services More information on the four ICAO ASBU...

  13. Hot Topics on the Web: Strategies for Research.

    ERIC Educational Resources Information Center

    Diaz, Karen R.; O'Hanlon, Nancy

    2001-01-01

    Presents strategies for researching topics on the Web that are controversial or current in nature. Discusses topic selection and overviews, including the use of online encyclopedias; search engines; finding laws and pending legislation; advocacy groups; proprietary databases; Web site evaluation; and the continuing usefulness of print materials.…

  14. The Implementation of Web Conferencing Technologies in Online Graduate Classes

    ERIC Educational Resources Information Center

    Zotti, Robert

    2017-01-01

    This dissertation examines the implementation of web conferencing technology in online graduate courses within management, engineering, and computer science programs. Though the spread of learning management systems over the past two decades has been dramatic, the use of web conferencing technologies has curiously lagged. The real-time…

  15. Social Networking on the Semantic Web

    ERIC Educational Resources Information Center

    Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam

    2005-01-01

    Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…

  16. Analyzing Web Server Logs to Improve a Site's Usage. The Systems Librarian

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2005-01-01

    This column describes ways to streamline and optimize how a Web site works in order to improve both its usability and its visibility. The author explains how to analyze logs and other system data to measure the effectiveness of the Web site design and search engine.

  17. SSE-GIS v1.03 Web Mapping Application Now Available

    Atmospheric Science Data Center

    2018-03-16

Wednesday, July 6, 2016 ... you haven’t already noticed the link to the new SSE-GIS web application on the SSE homepage entitled “GIS Web Mapping Applications and Services”, we invite you to visit the site. ...

  18. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    NASA Astrophysics Data System (ADS)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

The main objective of this paper is to discuss the effectiveness of visualising terrain generated from different contour intervals and draped with Unmanned Aerial Vehicle (UAV) images using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study were divided into three intervals: 1 m, 3 m, and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to a common extent for the overlay process. The Unity 3D game engine was used as the main development platform because applications built with it can be launched on different platforms. The clipped contour data and UAV image data were processed and exported to web format using Unity 3D, then published to a web server so that the effectiveness of the different 3D terrain datasets (contour data) draped with UAV images could be compared. Effectiveness was compared in terms of data size, loading time (during office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine, helping decision makers and planners in this field choose the contour interval appropriate for their tasks.

  19. Efficient Web Vulnerability Detection Tool for Sleeping Giant-Cross Site Request Forgery

    NASA Astrophysics Data System (ADS)

    Parimala, G.; Sangeetha, M.; AndalPriyadharsini, R.

    2018-04-01

Web applications are heavily used today because of their user-friendly environment and the ease of obtaining information via the Internet, but they are exposed to many threats. The CSRF attack is one of the most serious threats to web applications; it exploits vulnerabilities in the normal HTTP request-response cycle. It is hard to detect and hence is still present in most existing web applications. In a CSRF attack, unwanted actions on a trusted website are forced to happen without the user's knowledge, which is why it is placed on OWASP's list of the top 10 web application attacks. The proposed work performs a real-time scan for CSRF vulnerabilities on a given URL of a web application, or on a local host address for an organization, using the Python language. Client-side detection of CSRF relies on the count of forms present on the given website.
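
    A minimal sketch of the client-side, form-count heuristic described above, using only the Python standard library; the token-field name hints are assumptions, not the authors' exact detection rules.

    ```python
    # Fetch a page, count its <form> elements, and flag forms that carry no
    # hidden anti-CSRF token field. Heuristic and token names are assumed.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    TOKEN_HINTS = ("csrf", "token", "authenticity")  # common token-field names

    class FormScanner(HTMLParser):
        def __init__(self):
            super().__init__()
            self.forms = 0          # total forms on the page
            self.unprotected = 0    # forms with no token-like hidden input
            self._in_form = False
            self._has_token = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "form":
                self.forms += 1
                self._in_form, self._has_token = True, False
            elif tag == "input" and self._in_form:
                name = (attrs.get("name") or "").lower()
                if attrs.get("type") == "hidden" and any(h in name for h in TOKEN_HINTS):
                    self._has_token = True

        def handle_endtag(self, tag):
            if tag == "form" and self._in_form:
                if not self._has_token:
                    self.unprotected += 1
                self._in_form = False

    def scan(url):
        html = urlopen(url).read().decode("utf-8", errors="replace")
        scanner = FormScanner()
        scanner.feed(html)
        return scanner.forms, scanner.unprotected

    # forms, risky = scan("http://localhost:8000/login")
    ```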

  20. Can people find patient decision aids on the Internet?

    PubMed

    Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette

    2008-12-01

To determine if people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines. We chose five medical conditions for which English-language PtDAs were available from at least three different developers. The search engines used were: Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first-page results linked to informational Web pages about the condition; only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). The searches using the Yahoo! and Google search engines were more likely to find PtDAs. The combination of search terms condition, treatment, decision (e.g. breast cancer surgery decision) was the most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved with the use of standardized labelling, by providing patients with specific Web site addresses, or by access to an independent PtDA clearinghouse.

  1. IBM techexplorer and MathML: Interactive Multimodal Scientific Documents

    NASA Astrophysics Data System (ADS)

    Diaz, Angel

    2001-06-01

The World Wide Web provides a standard publishing platform for disseminating scientific and technical articles, books, journals, courseware, or even homework on the internet; the transition from paper to the web has brought new opportunities for creating interactive content. Students, scientists, and engineers are now faced with the task of rendering the 2D presentational structure of mathematics, harnessing the wealth of scientific and technical software, and creating truly accessible scientific portals across international boundaries and markets. The recent emergence of World Wide Web Consortium (W3C) standards such as the Mathematical Markup Language (MathML), the Extensible Stylesheet Language (XSL), and Aural CSS (ACSS) provides a foundation whereby mathematics can be displayed, enlivened, computed, and audio formatted. With interoperability ensured by standards, software applications can easily be brought together to create extensible and interactive scientific content. In this presentation we will provide an overview of the IBM techexplorer Hypermedia Browser, a web browser plug-in and ActiveX control aimed at bringing interactive mathematics to the masses across platforms and applications. We will demonstrate "live" mathematics, where documents that contain MathML expressions can be edited and computed right inside your favorite web browser. This demonstration will be generalized as we show how MathML can be used to enliven even PowerPoint presentations. Finally, we will close the loop by demonstrating a novel approach to spoken mathematics based on MathML, DOM, XSL, ACSS, techexplorer, and IBM ViaVoice. By making use of techexplorer as the glue that binds the rendered content to the web browser, the back-end computation software, the Java applets that augment the exposition, and voice-rendering systems such as ViaVoice, authors can indeed create truly extensible and interactive scientific content. For more information see: [http://www.software.ibm.com/techexplorer] [http://www.alphaworks.ibm.com] [http://www.w3.org]

  2. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    PubMed

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
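
    The reranking idea reduces to a late-fusion scheme; a toy sketch, with placeholder scoring functions rather than the paper's learned visual models:

    ```python
    # Retrieve candidates with a text engine, then re-order them by a
    # weighted sum of the text score and a query-vs-page-image visual
    # score. Both score columns below are placeholder numbers.
    def rerank(candidates, alpha=0.7):
        """candidates: list of (doc_id, text_score, visual_score) tuples."""
        fused = [
            (doc_id, alpha * text + (1 - alpha) * visual)
            for doc_id, text, visual in candidates
        ]
        return sorted(fused, key=lambda pair: pair[1], reverse=True)

    # Example: the visual signal promotes doc "b" above doc "a".
    print(rerank([("a", 0.80, 0.10), ("b", 0.78, 0.90), ("c", 0.40, 0.30)]))
    ```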

  3. Real-Time Payload Control and Monitoring on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)

    1998-01-01

    World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on Java Development Toolkit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefit.
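
    The "publish and subscribe" pattern the system relied on can be sketched in a few lines (shown here in Python rather than the paper's Java/JDK 1.1 setting; topic names and payloads are invented):

    ```python
    # A minimal event bus: subscribers register callbacks for a topic and
    # are notified whenever new telemetry is published on that topic.
    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

        def publish(self, topic, payload):
            # Every subscriber of the topic receives the new payload.
            for callback in self._subscribers[topic]:
                callback(payload)

    bus = EventBus()
    bus.subscribe("payload/temperature", lambda t: print(f"gauge  -> {t} K"))
    bus.subscribe("payload/temperature", lambda t: print(f"logger -> {t} K"))
    bus.publish("payload/temperature", 293.4)
    ```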

  4. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance, with better recall compared with "vanilla" LSH, even when using the same amount of space.
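
    A minimal random-hyperplane LSH sketch for cosine similarity, the family commonly applied to query-log vectors; this omits the paper's multi-probe variants and Hadoop deployment, and the parameters are illustrative:

    ```python
    # Each signature bit records which side of a random hyperplane a vector
    # falls on; similar vectors agree on most bits and so tend to collide.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_hasher(dim, n_bits):
        planes = rng.standard_normal((n_bits, dim))
        def signature(v):
            return tuple((planes @ v) > 0)
        return signature

    sig = make_hasher(dim=64, n_bits=16)
    a = rng.standard_normal(64)
    b = a + 0.05 * rng.standard_normal(64)   # near-duplicate of a
    c = rng.standard_normal(64)              # unrelated vector

    # Near-duplicates agree on far more bits than unrelated vectors.
    print(sum(x == y for x, y in zip(sig(a), sig(b))))  # close to 16
    print(sum(x == y for x, y in zip(sig(a), sig(c))))  # close to 8 on average
    ```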

  5. Web-based software tool for constraint-based design specification of synthetic biological systems.

    PubMed

    Oberortner, Ernst; Densmore, Douglas

    2015-06-19

miniEugene provides computational support for solving combinatorial design problems, enabling users to specify and enumerate designs for novel biological systems based on sets of biological constraints. This technical note presents a brief tutorial for biologists and software engineers in the field of synthetic biology on how to use miniEugene. After reading this technical note, users should know which biological constraints are available in miniEugene, understand the syntax and semantics of these constraints, and be able to follow a step-by-step guide to specify the design of a classical synthetic biological system, the genetic toggle switch. We also provide links and references to more information on the miniEugene web application and the integration of the miniEugene software library into sophisticated Computer-Aided Design (CAD) tools for synthetic biology ( www.eugenecad.org ).

  6. Software-supported USER cloning strategies for site-directed mutagenesis and DNA assembly.

    PubMed

    Genee, Hans Jasper; Bonde, Mads Tvillinggaard; Bagger, Frederik Otzen; Jespersen, Jakob Berg; Sommer, Morten O A; Wernersson, Rasmus; Olsen, Lars Rønn

    2015-03-20

    USER cloning is a fast and versatile method for engineering of plasmid DNA. We have developed a user friendly Web server tool that automates the design of optimal PCR primers for several distinct USER cloning-based applications. Our Web server, named AMUSER (Automated DNA Modifications with USER cloning), facilitates DNA assembly and introduction of virtually any type of site-directed mutagenesis by designing optimal PCR primers for the desired genetic changes. To demonstrate the utility, we designed primers for a simultaneous two-position site-directed mutagenesis of green fluorescent protein (GFP) to yellow fluorescent protein (YFP), which in a single step reaction resulted in a 94% cloning efficiency. AMUSER also supports degenerate nucleotide primers, single insert combinatorial assembly, and flexible parameters for PCR amplification. AMUSER is freely available online at http://www.cbs.dtu.dk/services/AMUSER/.

  7. WebViz: A web browser based application for collaborative analysis of 3D data

    NASA Astrophysics Data System (ADS)

    Ruegg, C. S.

    2011-12-01

In the age of high-speed Internet, where people can interact instantly, scientific tools have lacked technology that incorporates this style of communication on the web. To address this, a web application for geological studies has been created, tentatively titled WebViz. It uses the Google Web Toolkit to build an AJAX web application with features usually found only in desktop software, so that it can act as a piece of software accessible from anywhere in the globe over a reasonably fast Internet connection. One application of this technology involves data from the tsunami caused by the major Japan earthquakes: after the data are prepared for the rendering software HVR, WebViz can request images of the tsunami data and display them to anyone with access to the application. This convenience alone makes WebViz a viable solution, and the option to interact with the data with others around the world makes it a serious computational tool. WebViz runs on any JavaScript-enabled browser, including those on modern tablets and smartphones over a fast wireless connection, and because its current implementation is built with the Google Web Toolkit, the application is highly portable. Although many developers have been involved in the project, each has contributed to improving the usability and speed of the application. The most recent version brings a substantial speed increase as well as a more efficient user interface; the speed-up has been informally observed in recent uses of the application from China and Australia, with the hosting server located at the University of Minnesota. The user interface has been improved both visually and functionally: the buttons that rotate the 3D object have been replaced with a new layout whose function is easier to understand and which is also easy to use on mobile devices. With these changes, WebViz is easier to control and use.

  8. A Comparison between Quantity Surveying and Information Technology Students on Web Application in Learning Process

    ERIC Educational Resources Information Center

    Keng, Tan Chin; Ching, Yeoh Kah

    2015-01-01

    The use of web applications has become a trend in many disciplines including education. In view of the influence of web application in education, this study examines web application technologies that could enhance undergraduates' learning experiences, with focus on Quantity Surveying (QS) and Information Technology (IT) undergraduates. The…

  9. 'Sciencenet'--towards a global search and share engine for all scientific knowledge.

    PubMed

    Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-06-15

Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoffs. The free-to-use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.

  10. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively with developers from different domains working at different locations. These models can be linked to large scale real world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system, which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform uses a web API using JSON to allow external programs (referred to as `Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.
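
    A hedged sketch of how an external "App" might call such a JSON web API; the endpoint, envelope fields, and function name below are hypothetical, not Hydra Platform's documented interface:

    ```python
    # Hypothetical convention: one POST endpoint, with the function name
    # and its arguments wrapped in a JSON envelope.
    import json
    from urllib.request import Request, urlopen

    def call_api(base_url, function, args, session_id):
        body = json.dumps({function: args, "session_id": session_id}).encode()
        req = Request(base_url, data=body,
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            return json.load(resp)

    # network = call_api("http://localhost:8080/json", "get_network",
    #                    {"network_id": 42}, session_id="abc123")
    ```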

  11. Tracing medical information over the Internet.

    PubMed

    Mutairi, S M

    2000-05-01

The Internet has become, without doubt, a huge and valuable source of information for researchers. The wealth of information on the Internet is second to none, and medical information is no exception. Yet with the vast expansion of the Internet, and of the World Wide Web in particular, finding the kind of information one is looking for can mean browsing thousands of web sites, an experience much like looking for a needle in a haystack. That is why search engines and subject indexes were introduced as means to overcome this problem, and why they grew so rapidly. In general, there are three approaches to retrieving data from the World Wide Web: subject directories, search engines, and detailed subject indexes. However, no single search engine or directory is comprehensive, and it is recommended to use more than one, with different keywords and synonyms.

  12. Development of wide area environment accelerator operation and diagnostics method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Akito; Furukawa, Kazuro

    2015-08-01

    Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
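
    A minimal WebSocket client of the kind such an OPI might use, written with the third-party Python "websockets" package; the URL and subscription message format are assumptions, not the authors' protocol:

    ```python
    # Connect, subscribe to one process variable, and print each update
    # pushed by the server. The PV name and JSON shape are illustrative.
    import asyncio
    import json
    import websockets

    async def monitor(url):
        async with websockets.connect(url) as ws:
            await ws.send(json.dumps({"subscribe": "ACC:BEAM:CURRENT"}))
            async for message in ws:
                update = json.loads(message)
                print(update)

    # asyncio.run(monitor("ws://localhost:5678/ca"))
    ```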

  13. OLIVER: an online library of images for veterinary education and research.

    PubMed

    McGreevy, Paul; Shaw, Tim; Burn, Daniel; Miller, Nick

    2007-01-01

    As part of a strategic move by the University of Sydney toward increased flexibility in learning, the Faculty of Veterinary Science undertook a number of developments involving Web-based teaching and assessment. OLIVER underpins them by providing a rich, durable repository for learning objects. To integrate Web-based learning, case studies, and didactic presentations for veterinary and animal science students, we established an online library of images and other learning objects for use by academics in the Faculties of Veterinary Science and Agriculture. The objectives of OLIVER were to maximize the use of the faculty's teaching resources by providing a stable archiving facility for graphic images and other multimedia learning objects that allows flexible and precise searching, integrating indexing standards, thesauri, pull-down lists of preferred terms, and linking of objects within cases. OLIVER offers a portable and expandable Web-based shell that facilitates ongoing storage of learning objects in a range of media. Learning objects can be downloaded in common, standardized formats so that they can be easily imported for use in a range of applications, including Microsoft PowerPoint, WebCT, and Microsoft Word. OLIVER now contains more than 9,000 images relating to many facets of veterinary science; these are annotated and supported by search engines that allow rapid access to both images and relevant information. The Web site is easily updated and adapted as required.

  14. Just-in-time Database-Driven Web Applications

    PubMed Central

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

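    The "just-in-time" pattern, a thin web layer over a small database, can be sketched with Flask and SQLite (a modern stand-in for the article's HTML-editor-plus-desktop-database toolchain; the table and field names are invented):

    ```python
    # One route, one query, one HTML table - e.g. a bed-availability page.
    import sqlite3
    from flask import Flask

    app = Flask(__name__)
    DB = "beds.db"

    @app.route("/beds")
    def bed_availability():
        con = sqlite3.connect(DB)
        try:
            rows = con.execute(
                "SELECT ward, free_beds FROM bed_status ORDER BY ward"
            ).fetchall()
        finally:
            con.close()
        # Render a bare-bones HTML table straight from the query results.
        cells = "".join(f"<tr><td>{w}</td><td>{n}</td></tr>" for w, n in rows)
        return f"<table><tr><th>Ward</th><th>Free beds</th></tr>{cells}</table>"

    # flask --app thisfile run   (after creating beds.db with a bed_status table)
    ```
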
  15. MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.

    PubMed

    Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y

    2018-01-02

Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer .

  16. Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.

    ERIC Educational Resources Information Center

    Carlson, Randal D.; Repman, Judi

    2002-01-01

    Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)

  17. Use of Web Search Engines and Personalisation in Information Searching for Educational Purposes

    ERIC Educational Resources Information Center

    Salehi, Sara; Du, Jia Tina; Ashman, Helen

    2018-01-01

    Introduction: Students increasingly depend on Web search for educational purposes. This causes concerns among education providers as some evidence indicates that in higher education, the disadvantages of Web search and personalised information are not justified by the benefits. Method: One hundred and twenty university students were surveyed about…

  18. Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training

    ERIC Educational Resources Information Center

    Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María

    2015-01-01

    This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…

  19. 10 Ways To Take Charge of the Web. Easy Strategies for Internet Smarts.

    ERIC Educational Resources Information Center

    Wood, Julie M.

    2000-01-01

    Strategies to help teachers use the Internet effectively include: explore individual interests online; develop acceptable use policies; narrow the playing field; know search engines; use filters; utilize the World Wide Web to lighten the load; teach students to investigate websites effectively; use the Web for professional development; teach…

  20. Multitasking Web Searching and Implications for Design.

    ERIC Educational Resources Information Center

    Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda

    2003-01-01

    Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…

  1. Development and Evaluation of Mechatronics Learning System in a Web-Based Environment

    ERIC Educational Resources Information Center

    Shyr, Wen-Jye

    2011-01-01

The development of a remote laboratory suitable for reinforcing undergraduate-level teaching of mechatronics is important. For this reason, a Web-based mechatronics learning system, called RECOLAB (REmote COntrol LABoratory), for remote learning in engineering education has been developed in this study. The web-based environment is an…

  2. A Web Service and Interface for Remote Electronic Device Characterization

    ERIC Educational Resources Information Center

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  3. QUEST: An Assessment Tool for Web-Based Learning.

    ERIC Educational Resources Information Center

    Choren, Ricardo; Blois, Marcelo; Fuks, Hugo

In 1997, the Software Engineering Laboratory at Pontifical Catholic University of Rio de Janeiro (Brazil) implemented the first version of AulaNet (TM), a World Wide Web-based educational environment. Some of the teaching staff will use this environment in 1998 to offer regular term disciplines through the Web. This paper introduces Quest, a tool…

  4. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    ERIC Educational Resources Information Center

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  5. Communication Webagogy 2.0: More Click, Less Drag.

    ERIC Educational Resources Information Center

    Radford, Marie L.; Wagner, Kurt W.

    2000-01-01

    Argues that, because of the chaotic nature of the Web and the competing searching software, no single search tool will suffice. Lists and discusses meta search engines; communication meta-sites and subject directories, all indexed by humans; teaching resources for communication courses that utilize the unique features of the Web; and web sites…

  6. Guide to the Internet. The world wide web.

    PubMed Central

    Pallen, M.

    1995-01-01

The world wide web provides a uniform, user friendly interface to the Internet. Web pages can contain text and pictures and are interconnected by hypertext links. The addresses of web pages are recorded as uniform resource locators (URLs), transmitted by hypertext transfer protocol (HTTP), and written in hypertext markup language (HTML). Programs that allow you to use the web are available for most operating systems. Powerful on line search engines make it relatively easy to find information on the web. Browsing through the web--"net surfing"--is both easy and enjoyable. Contributing to the web is not difficult, and the web opens up new possibilities for electronic publishing and electronic journals. PMID:8520402

  7. Biotool2Web: creating simple Web interfaces for bioinformatics applications.

    PubMed

    Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg

    2006-01-01

Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/ Contact: Georg Fuellen (fuellen@alum.mit.edu).
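
    The core idea, generating a web form from an XML parameter description, can be sketched as follows; the XML layout is invented for illustration, since Biotool2Web itself is a Perl script with its own schema:

    ```python
    # Read a tool's parameters from XML and emit an HTML form for it.
    import xml.etree.ElementTree as ET

    XML = """
    <tool name="gc_content" script="gc.cgi">
      <param name="sequence" label="DNA sequence" type="textarea"/>
      <param name="window" label="Window size" type="text"/>
    </tool>
    """

    def xml_to_form(xml_text):
        tool = ET.fromstring(xml_text)
        fields = []
        for p in tool.findall("param"):
            widget = ("<textarea name='{0}'></textarea>"
                      if p.get("type") == "textarea"
                      else "<input type='text' name='{0}'/>").format(p.get("name"))
            fields.append(f"<label>{p.get('label')}: {widget}</label><br/>")
        return (f"<form action='{tool.get('script')}' method='post'>"
                + "".join(fields) + "<input type='submit'/></form>")

    print(xml_to_form(XML))
    ```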

  8. There comes a baby! What should I do? Smartphones' pregnancy-related applications: A web-based overview.

    PubMed

    Bert, Fabrizio; Passi, Stefano; Scaioli, Giacomo; Gualano, Maria R; Siliquini, Roberta

    2016-09-01

Our article aims to give an overview of the most mentioned smartphones' pregnancy-related applications (Apps). A keywords string with selected keywords was entered both in a general search engine (Google®) and PubMed. While PubMed returned no pertinent results, a total of 370 web pages were found on Google®, and 146 of them were selected. All the pregnancy-related Apps cited at least eight times were included. Information about each App's producer, price, contents, privacy policy, and presence of a scientific board was collected. Finally, nine apps were considered. The majority of them were free and available in the two main online markets (Apple® App Store and Android® Google Play). Five apps presented a privacy policy statement, while a scientific board was mentioned in only three. Further studies are needed in order to deepen the knowledge regarding the main risks of these devices, such as privacy loss, concerns over control of contents, the digital divide, and a potential reduction in humanization. © The Author(s) 2015.

  9. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    PubMed

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  10. Specification Patent Management for Web Application Platform Ecosystem

    NASA Astrophysics Data System (ADS)

    Fukami, Yoshiaki; Isshiki, Masao; Takeda, Hideaki; Ohmukai, Ikki; Kokuryo, Jiro

Diversified usage of web applications has encouraged the disintegration of the web platform into the management of identification and of applications. Users make use of various kinds of data linked to their identity with multiple applications on social web platforms such as Facebook or MySpace, and competition has emerged among web application platforms. Platformers can design their relationships with developers by controlling the patents on their own specifications and by adopting open technologies developed by external organizations; they choose how far to open according to the features of the specification and their market position. Patent management of specifications has thus become a key success factor in building competitive web application platforms. The various ways of attracting external developers, such as standardization and open source, have not previously been discussed and analyzed together.

  11. Ontology to relational database transformation for web application development and maintenance

    NASA Astrophysics Data System (ADS)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

Ontology is used as the knowledge representation while a database is used as the fact recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed into knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and the database can be generated from the ontology. Most existing frameworks generate the application from its database; in this research, the ontology is used to generate the application. As data are updated through the application, a mechanism triggers an update to the ontology so that the application can be rebuilt from the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer; in many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment, and a case study was conducted to prove the concept.
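
    The class-to-table step of such a transformation can be sketched with a simplified stand-in for a real OWL/RDF ontology; this is not the paper's framework:

    ```python
    # Each ontology class becomes a table; each datatype property becomes
    # a column. The input structure is a toy substitute for OWL/RDF.
    ontology = {
        "Employee": {"name": "TEXT", "hire_date": "TEXT", "salary": "REAL"},
        "Department": {"title": "TEXT", "budget": "REAL"},
    }

    def to_ddl(onto):
        statements = []
        for cls, props in onto.items():
            cols = ", ".join(["id INTEGER PRIMARY KEY"] +
                             [f"{name} {sqltype}" for name, sqltype in props.items()])
            statements.append(f"CREATE TABLE {cls} ({cols});")
        return statements

    for stmt in to_ddl(ontology):
        print(stmt)
    ```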

  12. Comparison of web-based and face-to-face interviews for application to an anesthesiology training program: a pilot study

    PubMed Central

    Malkin, Mathew R.; Lenart, John; Stier, Gary R.; Gatling, Jason W.; Applegate II, Richard L.

    2016-01-01

    Objectives This study compared admission rates to a United States anesthesiology residency program for applicants completing face-to-face versus web-based interviews during the admissions process. We also explored factors driving applicants to select each interview type. Methods The 211 applicants invited to interview for admission to our anesthesiology residency program during the 2014-2015 application cycle were participants in this pilot observational study. Of these, 141 applicants selected face-to-face interviews, 53 applicants selected web-based interviews, and 17 applicants declined to interview. Data regarding applicants' reasons for selecting a particular interview type were gathered using an anonymous online survey after interview completion. Residency program admission rates and survey answers were compared between applicants completing face-to-face versus web-based interviews. Results One hundred twenty-seven (75.1%) applicants completed face-to-face and 42 (24.9%) completed web-based interviews. The admission rate to our residency program was not significantly different between applicants completing face-to-face versus web-based interviews. One hundred eleven applicants completed post-interview surveys. The most common reasons for selecting web-based interviews were conflict of interview dates between programs, travel concerns, or financial limitations. Applicants selected face-to-face interviews due to a desire to interact with current residents, or geographic proximity to the residency program. Conclusions These results suggest that completion of web-based interviews is a viable alternative to completion of face-to-face interviews, and that choice of interview type does not affect the rate of applicant admission to the residency program. Web-based interviews may be of particular interest to applicants applying to a large number of programs, or with financial limitations. PMID:27039029

  13. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    PubMed Central

    2012-01-01

    Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945

  14. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.

    PubMed

    Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz

    2012-09-24

    The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.

  15. Digital dissemination platform of transportation engineering education materials.

    DOT National Transportation Integrated Search

    2014-09-01

    National agencies have called for more widespread adoption of best practices in engineering education. To facilitate this sharing of practices we will develop a web-based system that will be used by transportation engineering educators to share curri...

  16. Comparison of quality of internet pages on human papillomavirus immunization in Italian and in English.

    PubMed

    Tozzi, Alberto Eugenio; Buonuomo, Paola Sabrina; Ciofi degli Atti, Marta Luisa; Carloni, Emanuela; Meloni, Marco; Gamba, Fiorenza

    2010-01-01

    Information available on the Internet about immunizations may influence parents' perception about human papillomavirus (HPV) immunization and their attitude toward vaccinating their daughters. We hypothesized that the quality of information on HPV available on the Internet may vary with language and with the level of knowledge of parents. To this end we compared the quality of a sample of Web pages in Italian with a sample of Web pages in English. Five reviewers assessed the quality of Web pages retrieved with popular search engines using criteria adapted from the Good Information Practice Essential Criteria for Vaccine Safety Web Sites recommended by the World Health Organization. Quality of Web pages was assessed in the domains of accessibility, credibility, content, and design. Scores in these domains were compared through nonparametric statistical tests. We retrieved and reviewed 74 Web sites in Italian and 117 in English. Most retrieved Web pages (33.5%) were from private agencies. Median scores were higher in Web pages in English compared with those in Italian in the domain of accessibility (p < .01), credibility (p < .01), and content (p < .01). The highest credibility and content scores were those of Web pages from governmental agencies or universities. Accessibility scores were positively associated with content scores (p < .01) and with credibility scores (p < .01). A total of 16.2% of Web pages in Italian opposed HPV immunization compared with 6.0% of those in English (p < .05). Quality of information and number of Web pages opposing HPV immunization may vary with the Web site language. High-quality Web pages on HPV, especially from public health agencies and universities, should be easily accessible and retrievable with common Web search engines. Copyright 2010 Society for Adolescent Medicine. Published by Elsevier Inc. All rights reserved.

  17. Web Services--A Buzz Word with Potentials

    Treesearch

    János T. Füstös

    2006-01-01

    The simplest definition of a web service is an application that provides a web API. The web API exposes the functionality of the solution to other applications. The web API relies on other Internet-based technologies to manage communications. The resulting web services are pervasive, vendor-independent, language-neutral, and very low-cost. The main purpose of a web API...

  18. Using binary classification to prioritize and curate articles for the Comparative Toxicogenomics Database.

    PubMed

    Vishnyakova, Dina; Pasche, Emilie; Ruch, Patrick

    2012-01-01

    We report on the original integration of an automatic text categorization pipeline, so-called ToxiCat (Toxicogenomic Categorizer), that we developed to perform biomedical documents classification and prioritization in order to speed up the curation of the Comparative Toxicogenomics Database (CTD). The task can be basically described as a binary classification task, where a scoring function is used to rank a selected set of articles. Then components of a question-answering system are used to extract CTD-specific annotations from the ranked list of articles. The ranking function is generated using a Support Vector Machine, which combines three main modules: an information retrieval engine for MEDLINE (EAGLi), a gene normalization service (NormaGene) developed for a previous BioCreative campaign and finally, a set of answering components and entity recognizer for diseases and chemicals. The main components of the pipeline are publicly available both as web application and web services. The specific integration performed for the BioCreative competition is available via a web user interface at http://pingu.unige.ch:8080/Toxicat.
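
    The prioritization step, training a binary classifier and ranking articles by decision score, can be sketched with scikit-learn on toy data; ToxiCat's real features (gene, disease, and chemical signals) are far richer:

    ```python
    # Train on labeled abstracts, then rank new abstracts by SVM score so
    # curators see the most likely curatable articles first.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    train_texts = [
        "cadmium exposure alters gene expression in liver",   # curatable
        "arsenic induces oxidative stress pathway changes",   # curatable
        "hospital staffing survey results for 2010",          # not curatable
        "annual meeting announcement and travel grants",      # not curatable
    ]
    labels = [1, 1, 0, 0]

    vec = TfidfVectorizer()
    clf = LinearSVC().fit(vec.fit_transform(train_texts), labels)

    new_abstracts = ["benzene exposure and gene expression in leukemia",
                     "annual parking permit renewal notice"]
    scores = clf.decision_function(vec.transform(new_abstracts))
    for text, score in sorted(zip(new_abstracts, scores),
                              key=lambda p: p[1], reverse=True):
        print(f"{score:+.2f}  {text}")
    ```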

  19. Live video monitoring robot controlled by web over internet

    NASA Astrophysics Data System (ADS)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

The future is all about robots: robots can perform tasks where humans cannot, and they have huge applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical, and mechanical engineering, and it can perform tasks automatically on its own or under human supervision. The camera is the eye of the robot; this "robovision" helps in monitoring for security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web controls for the robot to move left, right, forward, and backward while streaming video. As we move toward smart environments and the IoT (Internet of Things), the system developed here connects over the Internet and can be operated from a smartphone using a web browser. A Raspberry Pi model B chip acts as the heart of the robot, with the necessary motors and the surveillance camera (R Pi 2) connected to the Raspberry Pi.
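
    A minimal sketch of the web-control loop: a Flask route mapping direction commands from the browser to motor pins via RPi.GPIO. The pin numbers and two-motor wiring are assumptions, not the paper's circuit:

    ```python
    # Each URL like /move/front stops all motors, then drives only the
    # pins for the requested direction.
    from flask import Flask
    import RPi.GPIO as GPIO

    app = Flask(__name__)
    MOTORS = {"left_fwd": 17, "left_back": 18, "right_fwd": 22, "right_back": 23}
    MOVES = {
        "front": ("left_fwd", "right_fwd"),
        "back":  ("left_back", "right_back"),
        "left":  ("right_fwd",),
        "right": ("left_fwd",),
    }

    GPIO.setmode(GPIO.BCM)
    for pin in MOTORS.values():
        GPIO.setup(pin, GPIO.OUT)

    @app.route("/move/<direction>")
    def move(direction):
        for pin in MOTORS.values():
            GPIO.output(pin, GPIO.LOW)
        for name in MOVES.get(direction, ()):
            GPIO.output(MOTORS[name], GPIO.HIGH)
        return f"moving {direction}"

    # app.run(host="0.0.0.0")  # then open http://<pi-address>:5000/move/front
    ```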

  20. The Online Bioinformatics Resources Collection at the University of Pittsburgh Health Sciences Library System--a one-stop gateway to online bioinformatics databases and software tools.

    PubMed

    Chen, Yi-Bu; Chattopadhyay, Ansuman; Bergen, Phillip; Gadd, Cynthia; Tannery, Nancy

    2007-01-01

    To bridge the gap between the rising information needs of biological and medical researchers and the rapidly growing number of online bioinformatics resources, we have created the Online Bioinformatics Resources Collection (OBRC) at the Health Sciences Library System (HSLS) at the University of Pittsburgh. The OBRC, containing 1542 major online bioinformatics databases and software tools, was constructed using the HSLS content management system built on the Zope Web application server. To enhance the output of search results, we further implemented the Vivísimo Clustering Engine, which automatically organizes the search results into categories created dynamically based on the textual information of the retrieved records. As the largest online collection of its kind and the only one with advanced search results clustering, OBRC is aimed at becoming a one-stop guided information gateway to the major bioinformatics databases and software tools on the Web. OBRC is available at the University of Pittsburgh's HSLS Web site (http://www.hsls.pitt.edu/guides/genetics/obrc).

  1. Digital Investigations of AN Archaeological Smart Point Cloud: a Real Time Web-Based Platform to Manage the Visualisation of Semantical Queries

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.

    2017-05-01

While virtual copies of the real world tend to be created faster than ever through point clouds and their derivatives, using them proficiently demands tools adapted to all professionals, to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate, progressively aggregating expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible in any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure into which metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of information pertinent to the end users, leading to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.

  2. Transforming Systems Engineering through Model Centric Engineering

    DTIC Science & Technology

    2017-08-08

[Only figure-list fragments of this record survive: "Semantic Web Technologies related to Layers of Abstraction", "NASA/JPL Instantiation of OpenMBEE (circa 2014)", and "NASA/JPL Foundational Ontology for Systems Engineering", together with a reference to a Digital Engineering (DE) Transformation initiative and a relationship fostered with the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory.]

  3. Voice-enabled Knowledge Engine using Flood Ontology and Natural Language Processing

    NASA Astrophysics Data System (ADS)

    Sermet, M. Y.; Demir, I.; Krajewski, W. F.

    2015-12-01

The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and limited general science background. To improve effective communication with such an audience, we have introduced a voice-enabled knowledge engine on flood-related issues in IFIS. Instead of navigating within many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analysis, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, analysis and visualization tools to answer natural language questions. Our goal is the systematization of data and modeling results on flood-related issues in Iowa and the provision of an interface for definitive answers to factual queries. The goal of the knowledge engine is to make all flood-related knowledge in Iowa easily accessible to everyone and to support voice-enabled natural language input. We aim to integrate and curate all flood-related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models, as algorithms, and curates all flood-related data and resources so that all these resources are computable. The IFIS Knowledge Engine computes the answer by deriving it from its computational knowledge base. The knowledge engine processes the statement, accesses the data warehouse, runs complex database queries on the server side, and returns outputs in various formats. This presentation provides an overview of the IFIS Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses the future plans for providing knowledge on flood-related issues and resources. The IFIS Knowledge Engine provides an alternative access method to the comprehensive set of tools and data resources available in IFIS. The current implementation of the system accepts free-form input and offers voice recognition capabilities within browser and mobile applications.

  4. Creating Web-Based Scientific Applications Using Java Servlets

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Arnold, James O. (Technical Monitor)

    2001-01-01

    There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently, and it can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades are simplified because only one copy of the application exists, in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines: the servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper will discuss are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process will be demonstrated by building a web-based application to compute stagnation point heat transfer.
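
    The servlet pattern the paper walks through (client sends input, server runs the computation, output goes back to the client) can be sketched compactly. To keep this listing's examples in one language, the sketch below uses Python's standard http.server rather than a Java HttpServlet; the placeholder computation is hypothetical and is not the paper's heat-transfer code.

      # Request/compute/respond pattern of an HTTP servlet, sketched with Python's stdlib.
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import urlparse, parse_qs

      def run_model(velocity):
          # Placeholder server-side computation; the paper's demo computed
          # stagnation point heat transfer instead.
          return 0.5 * velocity ** 2

      class ComputeHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              params = parse_qs(urlparse(self.path).query)
              velocity = float(params.get("velocity", ["0"])[0])  # client-supplied input
              result = run_model(velocity)                        # runs on the server
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.end_headers()
              self.wfile.write(f"result={result}\n".encode())     # output back to client

      if __name__ == "__main__":
          HTTPServer(("localhost", 8000), ComputeHandler).serve_forever()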

  5. Testing Web Applications with Mutation Analysis

    ERIC Educational Resources Information Center

    Praphamontripong, Upsorn

    2017-01-01

    Web application software uses new technologies that have novel methods for integration and state maintenance, amounting to new control flow mechanisms and new variable scoping. While modern web development technologies enhance the capabilities of web applications, they introduce challenges that current testing techniques do not adequately test…
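
    As a toy illustration of mutation analysis applied to a web artifact, the Python sketch below generates faulty variants ("mutants") of an HTML form by altering its method and its control-flow target; a test suite is then judged by how many such mutants it can distinguish. The operators and the page are invented for illustration and are far simpler than the paper's operators.

      # Toy web mutation operators: generate faulty variants of a page for testing.
      PAGE = '<form action="/login" method="post"><input name="user"></form>'

      def mutate_method(html):
          return html.replace('method="post"', 'method="get"')        # swap transfer mode

      def mutate_target(html):
          return html.replace('action="/login"', 'action="/logout"')  # redirect control flow

      mutants = [op(PAGE) for op in (mutate_method, mutate_target)]
      for m in mutants:
          print(m)   # a good test suite should fail ("kill") each mutant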

  6. 75 FR 52255 - Airworthiness Directives; Air Tractor, Inc. Models AT-802 and AT-802A Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-25

    … replacement at whichever … of the caps, the web plates, the center … following Snow Engineering Co. Service Letter 80GG, revised December 21, 2005; Snow Engineering Co. Service Letter 284, dated October 4, 2009; Snow Engineering Co. Service Letter 281, dated August 1, 2009; Snow Engineering Co. …

  7. Integrated gas turbine engine-nacelle

    NASA Technical Reports Server (NTRS)

    Adamson, A. P.; Sargisson, D. F.; Stotler, C. L., Jr. (Inventor)

    1977-01-01

    A nacelle for use with a gas turbine engine is presented. An integral webbed structure resembling a spoked wheel rigidly interconnects the nacelle and engine while providing lightweight support. The inner surface of the nacelle defines the outer limits of the engine motive fluid flow annulus, while the outer surface defines a streamlined envelope for the engine.

  8. Build, Buy, Open Source, or Web 2.0?: Making an Informed Decision for Your Library

    ERIC Educational Resources Information Center

    Fagan, Jody Condit; Keach, Jennifer A.

    2010-01-01

    When improving a web presence, today's libraries have a choice: using a free Web 2.0 application, opting for open source, buying a product, or building a web application. This article discusses how to make an informed decision for one's library. The authors stress that deciding whether to use a free Web 2.0 application, to choose open source, to…

  9. Who wrote the "Letter to the Hebrews"?: data mining for detection of text authorship

    NASA Astrophysics Data System (ADS)

    Sabordo, Madeleine; Chai, Shong Y.; Berryman, Matthew J.; Abbott, Derek

    2005-02-01

    This paper explores the authorship of the Letter to the Hebrews using a number of different measures of relationship between texts of the New Testament. The methods used in the study include file zipping and compression techniques, prediction by partial matching, and the word recurrence interval technique. The long-term motivation is that the techniques employed here may find applicability in future generations of web search engines, email authorship identification, detection of plagiarism, and filtering of terrorist email traffic.
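
    Compression-based similarity of the kind the study uses can be sketched with the normalized compression distance: texts sharing structure (for example, the same author's style) tend to compress better when concatenated. The Python below uses zlib; the sample strings are placeholders, not the New Testament corpus.

      # Normalized compression distance (NCD): smaller values suggest more shared structure.
      import zlib

      def csize(s):
          return len(zlib.compress(s))

      def ncd(x, y):
          cx, cy = csize(x.encode()), csize(y.encode())
          cxy = csize((x + y).encode())
          return (cxy - min(cx, cy)) / max(cx, cy)

      text_a = "In the beginning was the word, and the word was with ..."
      text_b = "In the beginning was the word; the word was with ..."
      text_c = "Completely unrelated prose about turbine engines and nacelles."
      print(ncd(text_a, text_b))   # expected to be lower (more similar)
      print(ncd(text_a, text_c))   # expected to be higher (less similar)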

  10. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  11. The quality of mental health information commonly searched for on the Internet.

    PubMed

    Grohol, John M; Slimowicz, Joseph; Granda, Rebecca

    2014-04-01

    Previous research has reviewed the quality of online information related to specific mental disorders, yet no comprehensive study has been conducted on the overall quality of mental health information searched for online. This study examined the first 20 search results of two popular search engines (Google and Bing) for 11 common mental health terms. Results were analyzed using the DISCERN instrument, an adaptation of the Depression Website Content Checklist (ADWCC), the Flesch Reading Ease and Flesch-Kincaid Grade Level readability measures, HONCode badge display, and commercial status, resulting in an analysis of 440 web pages. Quality of Web site results varied by the type of disorder examined, with higher quality Web sites found for schizophrenia, bipolar disorder, and dysthymia, and lower quality ratings for phobia, anxiety, and panic disorder Web sites. Of the total Web sites analyzed, 67.5% had good or better quality content. Nearly one-third of the search results came from three entities: WebMD, Wikipedia, and the Mayo Clinic. The mean Flesch Reading Ease score was 41.21, and the mean Flesch-Kincaid Grade Level was 11.68. The presence of the HONCode badge and noncommercial status had a small correlation with Web site quality, and Web sites displaying the HONCode badge, as well as commercial sites, had lower readability scores. Popular search engines appear to offer generally reliable results pointing to mostly good or better quality mental health Web sites; however, additional work is needed to make these sites more readable.
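
    Both readability measures reported above are fixed formulas over word, sentence, and syllable counts. The Python below computes them with a crude vowel-group syllable counter; real tools use dictionary-based syllable counts, so treat the output as approximate. The formulas themselves are the standard published ones.

      # Flesch Reading Ease and Flesch-Kincaid Grade Level (standard published formulas).
      import re

      def count_syllables(word):
          # Crude approximation: count groups of consecutive vowels.
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def readability(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          w, s = len(words), sentences
          fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
          fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
          return fre, fkgl

      fre, fkgl = readability("The clinician explained the diagnosis. It was clear.")
      print(f"Flesch Reading Ease: {fre:.1f}, Flesch-Kincaid Grade: {fkgl:.1f}")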

  12. Cross-Dataset Analysis and Visualization Driven by Expressive Web Services

    NASA Astrophysics Data System (ADS)

    Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad

    2015-04-01

    The deluge of data hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: rather than requiring data to be downloaded and stored in advance, they interact in real time with GIS applications. Because a very large amount of data is curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data, and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources with different spatial and time resolutions. It also acts as a proof of concept for integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources free for rendering the visualization. The application offers a set of features to visualize and cross-compare the datasets. Users can select a region of interest in space and time on which an aerosol map layer is plotted. Hovmoeller time-latitude and time-longitude profiles can be displayed by selecting orthogonal cross-sections on the globe. Statistics about the selected dataset are also displayed in various text and plot formats. The datasets can also be cross-compared using either the delta map tool or the merged map tool. For more advanced users, a WCPS query console is offered, allowing users to process their data with ad-hoc queries and then choose how to display the results. Overall, the user has a rich set of tools to visualize and cross-compare the aerosol datasets. With our application we have shown how the NASA WorldWind framework can be used to display results processed efficiently, and entirely, on the server side using the expressiveness of the OGC WCPS web service. The application serves not only as a proof of concept of a new paradigm for working with large geospatial data but also as a useful tool for environmental data analysts.
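
    A WCPS request of the kind described is a small functional query shipped to the server, which returns only the derived product. The Python sketch below sends a hedged example query with the standard library's urllib; the endpoint URL, request parameter name, and coverage and axis names are assumptions for illustration, not the authors' deployment.

      # Sketch: send a WCPS processing query and receive only the derived result.
      from urllib.parse import urlencode
      from urllib.request import urlopen

      ENDPOINT = "https://example.org/rasdaman/ows"   # hypothetical WCPS endpoint

      # Average aerosol optical thickness over a time slice, encoded as CSV.
      # Coverage and axis names are illustrative; they depend on the server's data.
      wcps = ('for c in (AerosolOpticalThickness) '
              'return encode(avg(c[ansi("2014-01":"2014-12")]), "csv")')

      url = f"{ENDPOINT}?{urlencode({'query': wcps})}"
      with urlopen(url) as resp:   # server does the processing, client gets scalars
          print(resp.read().decode())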

  13. Web of Science, Scopus, and Google Scholar citation rates: a case study of medical physics and biomedical engineering: what gets cited and what doesn't?

    PubMed

    Trapp, Jamie

    2016-12-01

    There are often differences in a publication's citation count depending on the database accessed. Here, aspects of citation counts for medical physics and biomedical engineering papers are studied using papers published in the journal Australasian Physical & Engineering Sciences in Medicine. Comparison is made between the Web of Science, Scopus, and Google Scholar. Papers are categorised by subject matter, and citation trends are examined. It is shown that review papers as a group tend to receive more citations on average; however, the highest-cited individual papers are more likely to be research papers.

  14. WebGLORE: a web service for Grid LOgistic REgression.

    PubMed

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-12-15

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from sensitive distributed datasets. It transfers only aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, Java Applet/Servlet, and PHP technologies to provide an easy-to-use web service that helps biomedical researchers break down policy barriers during information exchange. Availability: http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of the GNU General Public License as published by the Free Software Foundation.
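
    The core idea, sites sharing only aggregate statistics while a server assembles the global model, can be sketched as a distributed Newton-Raphson update for logistic regression: each site sends its local gradient and Hessian contributions, never row-level data. The numpy sketch below is a generic illustration of that idea, not WebGLORE's code; the synthetic data are invented.

      # Distributed logistic regression via aggregated statistics (generic sketch).
      import numpy as np

      rng = np.random.default_rng(0)
      # Three "sites", each holding private rows (X) and labels (y).
      sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]

      def local_stats(X, y, beta):
          """Each site computes only aggregates: gradient and Hessian pieces."""
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          grad = X.T @ (y - p)                      # local gradient contribution
          hess = X.T @ np.diag(p * (1 - p)) @ X     # local Hessian contribution
          return grad, hess

      beta = np.zeros(3)
      for _ in range(10):                           # Newton-Raphson on summed aggregates
          grads, hessians = zip(*(local_stats(X, y, beta) for X, y in sites))
          beta = beta + np.linalg.solve(sum(hessians), sum(grads))

      print("global coefficients:", beta)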

  15. Query-Structure Based Web Page Indexing

    DTIC Science & Technology

    2012-11-01

    … the massive amount of data present on the web. In our third participation in the web track at TREC 2012, we explore the idea of building an … the ad-hoc and diversity task. From the introduction: the rapid growth and massive quantities of data on the Internet have increased the importance and … complexity of information retrieval systems. The amount and the diversity of the web data introduce shortcomings in the way search engines rank their …

  16. How To Succeed in Promoting Your Web Site: The Impact of Search Engine Registration on Retrieval of a World Wide Web Site.

    ERIC Educational Resources Information Center

    Tunender, Heather; Ervin, Jane

    1998-01-01

    Character strings were planted in a World Wide Web site (Project Whistlestop) to test the indexing and retrieval rates of five Web search tools (Lycos, InfoSeek, AltaVista, Yahoo, Excite). It was found that the search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…

  17. Talking Trash on the Internet: Working Real Data into Your Classroom.

    ERIC Educational Resources Information Center

    Lynch, Maurice P.; Walton, Susan A.

    1998-01-01

    Describes how a middle school teacher used the Chesapeake Bay National Estuarine Research Reserve in Virginia (CBNERRVA) Web site to provide scientific data for a unit on recycling. Includes sample data sheets and tables, charts results of a Web search for marine debris using different search engines, and lists selected marine data Web sites. (PEN)

  18. 47 CFR 73.8000 - Incorporation by reference.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Engineering and Technology (OET) Web site: http://www.fcc.gov/oet/info/documents/bulletins/. (1) OET Bulletin...., Suite 1200, Washington, DC 20006, or at the ATSC Web site: http://www.atsc.org/standards.html. (1) ATSC... Standards Institute (ANSI), 25 West 43rd Street, 4th Floor, New York, NY 10036 or at the ANSI Web site: http...

  19. In Search of a Better Search Engine

    ERIC Educational Resources Information Center

    Kolowich, Steve

    2009-01-01

    Early this decade, the number of Web-based documents stored on the servers of the University of Florida hovered near 300,000. By the end of 2006, that number had leapt to four million. Two years later, the university hosts close to eight million Web documents. Web sites for colleges and universities everywhere have become repositories for data…

  20. Searching the Web: The Public and Their Queries.

    ERIC Educational Resources Information Center

    Spink, Amanda; Wolfram, Dietmar; Jansen, Major B. J.; Saracevic, Tefko

    2001-01-01

    Reports findings from a study of searching behavior by over 200,000 users of the Excite search engine. Analysis of over one million queries revealed most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features. Concludes that Web searching by the public differs significantly from searching of…

  1. Users' Perceptions of the Web As Revealed by Transaction Log Analysis.

    ERIC Educational Resources Information Center

    Moukdad, Haidar; Large, Andrew

    2001-01-01

    Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest most users do not employ advanced search features, and the linguistic structure often resembles a human-human communication model that is not always successful in human-computer communication.…

  2. A Portrait of the Audience for Instruction in Web Searching: Results of a Survey Conducted at Two Canadian Universities.

    ERIC Educational Resources Information Center

    Tillotson, Joy

    2003-01-01

    Describes a survey that was conducted involving participants in the library instruction program at two Canadian universities in order to describe the characteristics of students receiving instruction in Web searching. Examines criteria for evaluating Web sites, search strategies, use of search engines, and frequency of use. Questionnaire is…

  3. Development, dissemination, and applications of a new terminological resource, the Q-Code taxonomy for professional aspects of general practice/family medicine.

    PubMed

    Jamoulle, Marc; Resnick, Melissa; Grosjean, Julien; Ittoo, Ashwin; Cardillo, Elena; Vander Stichele, Robert; Darmoni, Stefan; Vanmeerbeek, Marc

    2018-12-01

    While documentation of the clinical aspects of General Practice/Family Medicine (GP/FM) is assured by the International Classification of Primary Care (ICPC), there is no taxonomy for the professional aspects (context and management) of GP/FM. Objective: to present the development, dissemination, applications, and resulting face validity of the Q-Codes taxonomy, specifically designed to describe contextual features of GP/FM and proposed as an extension to the ICPC. Development: the Q-Codes taxonomy was developed from Lamberts' seminal idea for indexing contextual content (1987) by a multi-disciplinary team of knowledge engineers, linguists, and general practitioners, through a qualitative and iterative analysis of 1702 abstracts from six GP/FM conferences using Atlas.ti software. A total of 182 concepts, called Q-Codes, representing professional aspects of GP/FM were identified and organized in a taxonomy. Dissemination: the taxonomy is published as an online terminological resource using semantic web techniques and the Web Ontology Language (OWL) (http://www.hetop.eu/Q). Each Q-Code is identified by a unique resource identifier (URI) and provided with preferred terms and scope notes in ten languages (Portuguese, Spanish, English, French, Dutch, Korean, Vietnamese, Turkish, Georgian, German), as well as search filters for MEDLINE and web searches. Applications: the taxonomy has already been used to support queries in bibliographic databases (e.g., MEDLINE), to facilitate indexing of grey literature in GP/FM such as congress abstracts, master's theses, and websites, and as an educational tool in vocational teaching. Conclusions: the rapidly growing list of practical applications provides face validity for the usefulness of this freely available new terminological resource.

  4. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems, and industries. Bioinspired design (BID) has traditionally focused on the unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and they represent guidelines for designing sustainable industry networks.
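
    Two of the simplest structural food web properties used in such analyses are linkage density and connectance, both computed from a directed link matrix. The Python sketch below evaluates them for a toy industrial network; the adjacency matrix is invented for illustration.

      # Structural food web metrics on a directed adjacency matrix (toy example).
      import numpy as np

      # A[i, j] = 1 if actor i sends material/energy to actor j (hypothetical network).
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [0, 0, 0, 1],
                    [1, 0, 0, 0]])

      S = A.shape[0]                 # number of species / industrial actors
      L = int(A.sum())               # number of directed links
      linkage_density = L / S        # average links per actor
      connectance = L / S**2         # realized fraction of possible links

      print(f"S={S}, L={L}, L/S={linkage_density:.2f}, C={connectance:.2f}")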

  5. Automatic document classification of biological literature

    PubMed Central

    Chen, David; Müller, Hans-Michael; Sternberg, Paul W

    2006-01-01

    Background: Document classification is a widespread problem with many applications, from organizing search engine snippets to spam filtering. We previously described Textpresso, a text-mining system for biological literature that marks up full text according to a shallow ontology including terms of biological interest. This project investigates document classification in the context of biological literature, making use of the Textpresso markup of a corpus of Caenorhabditis elegans literature. Results: We present a two-step text categorization algorithm to classify a corpus of C. elegans papers. Our classification method first uses a support vector machine-trained classifier, followed by a novel, phrase-based clustering algorithm. This clustering step autonomously creates cluster labels that are descriptive and understandable by humans. The clustering engine performed better on a standard test set (Reuters-21578) than previously published results (F-value of 0.55 vs. 0.49), while producing cluster descriptions that appear more useful. A web interface allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. Conclusion: We have demonstrated a simple method to classify biological documents that embodies an improvement over current methods. While the classification results are currently optimized for Caenorhabditis elegans papers by human-created rules, the classification engine can be adapted to different types of documents, as the web interface demonstrates. PMID:16893465
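
    The first step of the two-step method, an SVM-trained classifier over document text, can be sketched with scikit-learn; the tiny corpus and labels below are invented, and the paper's phrase-based clustering step is not shown.

      # Step one of a two-step pipeline: SVM text categorization (illustrative corpus).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      docs = [
          "mutant phenotype observed in C. elegans germline",
          "RNAi knockdown alters larval development",
          "novel web server for sequence alignment",
          "database release notes and software updates",
      ]
      labels = ["biology", "biology", "tool", "tool"]

      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(docs, labels)
      print(clf.predict(["RNAi screen of germline mutants"]))   # expected: biology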

  6. Development of grid-like applications for public health using Web 2.0 mashup techniques.

    PubMed

    Scotch, Matthew; Yip, Kevin Y; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, and naming systems, and the need to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies, including Yahoo! Pipes, within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a "system of systems," often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics.

  7. PaaS for web applications with OpenShift Origin

    NASA Astrophysics Data System (ADS)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management, and authentication services.

  8. WIRM: An Open Source Toolkit for Building Biomedical Web Applications

    PubMed Central

    Jakobovits, Rex M.; Rosse, Cornelius; Brinkley, James F.

    2002-01-01

    This article describes an innovative software toolkit that allows the creation of web applications that facilitate the acquisition, integration, and dissemination of multimedia biomedical data over the web, thereby reducing the cost of knowledge sharing. There is a lack of high-level web application development tools suitable for use by researchers, clinicians, and educators who are not skilled programmers. Our Web Interfacing Repository Manager (WIRM) is a software toolkit that reduces the complexity of building custom biomedical web applications. WIRM’s visual modeling tools enable domain experts to describe the structure of their knowledge, from which WIRM automatically generates full-featured, customizable content management systems. PMID:12386108

  9. Web 2.0 Applications in China

    NASA Astrophysics Data System (ADS)

    Zhai, Dongsheng; Liu, Chen

    Since 2005, the term Web 2.0 has gradually become a hot topic on the Internet. Web 2.0 lets ordinary users, as distinct from webmasters or web coders, create web content. Web 2.0 has entered our work and our lives, and has even become an indispensable part of our life on the web; its applications are already widespread in many fields on the Internet. So far, China has about 137 million netizens [1]; its Web 2.0 market is therefore so attractive that much venture capital is flowing into the Chinese Web 2.0 market, and there are also many new Web 2.0 companies in China. However, the development of Web 2.0 in China is accompanied by problems and obstacles. In this paper, we discuss Web 2.0 applications in China, together with their current problems and future development trends.

  10. Context-Aware Online Commercial Intention Detection

    NASA Astrophysics Data System (ADS)

    Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng

    With more and more commercial activity moving onto the Internet, people tend to purchase what they need through the Internet or to conduct online research before the actual transactions happen. For many Web users, online commercial activity starts with submitting a search query to a search engine. Like common Web search queries, queries with commercial intention are usually very short. Distinguishing queries with commercial intention from common queries helps search engines provide proper search results and advertisements, helps Web users obtain the information they desire, and helps advertisers benefit from the potential transactions. However, the intentions behind a query vary widely for users with different backgrounds and interests. The intentions can even differ for the same user when the query is issued in different contexts. In this paper, we present a new algorithmic framework based on skip-chain conditional random fields (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm's performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm improves the F1 score by more than 10% over previous algorithms for commercial intention detection.

  11. 75 FR 26321 - Seventeenth Plenary Meeting: RTCA Special Committee 203: Unmanned Aircraft Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-11

    ... RTCA Workspace Web Tool Special Committee Status Overview Workgroup Updates WG1--Systems Engineering..., Washington, DC 20036; telephone (202) 833-9339; fax (202) 833-9434; Web site http://www.rtca.org...

  12. FireProt: web server for automated design of thermostable proteins

    PubMed Central

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    There is continuous interest in increasing protein stability to enhance the usability of proteins in numerous biomedical and biotechnological applications. A number of in silico tools for predicting the effect of mutations on protein stability have been developed recently. However, existing tools typically predict only single-point mutations with a small effect on protein stability, and predictions have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies to make reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  13. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items in a large database of complex objects. Due to the very large scale of the data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment at very large scale. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
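
    The idea behind multi-probe LSH is that, instead of building many hash tables, a query also inspects buckets whose codes differ slightly from the query's own code. A minimal sign-random-projection version for cosine similarity is sketched below in Python; the dimensions, data, and probing depth are illustrative, not the paper's configuration.

      # Multi-probe LSH with sign random projections (cosine similarity), minimal sketch.
      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(42)
      DIM, BITS = 32, 8
      planes = rng.normal(size=(BITS, DIM))          # random hyperplanes define the hash

      def code(v):
          return tuple((planes @ v > 0).astype(int)) # one bit per hyperplane

      index = defaultdict(list)
      data = rng.normal(size=(1000, DIM))
      for i, v in enumerate(data):
          index[code(v)].append(i)

      def query(v, probes=1):
          """Check the home bucket plus buckets within Hamming distance `probes`."""
          base = code(v)
          candidates = list(index.get(base, []))
          if probes >= 1:
              for b in range(BITS):                  # flip one bit at a time
                  flipped = list(base)
                  flipped[b] ^= 1
                  candidates += index.get(tuple(flipped), [])
          return candidates

      q = data[0] + rng.normal(scale=0.05, size=DIM)  # a near-duplicate of item 0
      print(0 in query(q, probes=1))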

  14. AMP: A platform for managing and mining data in the treatment of Autism Spectrum Disorder.

    PubMed

    Linstead, Erik; Burns, Ryan; Duy Nguyen; Tyler, David

    2016-08-01

    We introduce AMP (Autism Management Platform), an integrated health care information system for capturing, analyzing, and managing data associated with the diagnosis and treatment of Autism Spectrum Disorder in children. AMP's mobile application simplifies the means by which parents, guardians, and clinicians collect and share multimedia data with one another, facilitating communication and reducing data redundancy while simplifying retrieval. Additionally, AMP provides an intelligent web interface and analytics platform that allow physicians and specialists to aggregate and mine patient data in real time, as well as to give relevant feedback from which the system automatically learns data-filtering preferences over time. Together, AMP's mobile app, web client, and analytics engine implement a rich set of features that streamline data collection and analysis in a secure and easy-to-use system, so that data may be more effectively leveraged to guide treatment.

  15. Intelligent web image retrieval system

    NASA Astrophysics Data System (ADS)

    Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook

    2001-07-01

    Recently, web sites such as e-business and shopping mall sites have come to deal with large amounts of image information. To find a specific image among these sources, we usually use web search engines or image database engines that rely on keyword-only retrieval or on color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, texture- and color-based image classification and indexing techniques, and representation schemes for user usage patterns. A query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of each user's preferences by generating query logs and automatically adds further search information to subsequent queries. To show the usefulness of the proposed system, experimental results on recall and precision are also presented.
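
    Color-based indexing of the kind described can be reduced to comparing per-channel color histograms. The Python sketch below builds three-channel histograms with numpy and ranks images by histogram intersection; the random "images" stand in for a real collection, and the paper's texture features are not shown.

      # Color-histogram indexing and retrieval, minimal sketch with synthetic images.
      import numpy as np

      rng = np.random.default_rng(7)
      images = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(5)]

      def histogram(img, bins=8):
          """Concatenate per-channel histograms, normalized to sum to 1."""
          h = np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
                              for c in range(3)]).astype(float)
          return h / h.sum()

      def intersection(h1, h2):
          return np.minimum(h1, h2).sum()    # 1.0 means identical color distributions

      index = [histogram(img) for img in images]
      q = histogram(images[2])               # query with a known image
      ranked = sorted(range(len(index)), key=lambda i: -intersection(q, index[i]))
      print("best match:", ranked[0])        # expected: 2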

  16. The Number of Scholarly Documents on the Public Web

    PubMed Central

    Khabsa, Madian; Giles, C. Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
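
    The capture/recapture estimate rests on the Lincoln-Petersen identity: if one engine "captures" n1 documents, another captures n2, and m appear in both, the total population is estimated as N ≈ n1·n2/m. A worked toy example follows; the numbers are invented for illustration and are not the paper's data.

      # Lincoln-Petersen capture/recapture estimate (illustrative numbers).
      n1 = 80_000_000   # documents covered by engine A (hypothetical)
      n2 = 60_000_000   # documents covered by engine B (hypothetical)
      m  = 45_000_000   # documents found in both samples (hypothetical)

      N_hat = n1 * n2 / m          # estimated size of the whole population
      print(f"estimated scholarly documents: {N_hat:,.0f}")   # ~106,666,667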

  17. The number of scholarly documents on the public web.

    PubMed

    Khabsa, Madian; Giles, C Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.

  18. Taming the Information Jungle with WWW Search Engines.

    ERIC Educational Resources Information Center

    Repman, Judi; And Others

    1997-01-01

    Because searching the Web with different engines often produces different results, the best strategy is to learn how each engine works. Discusses comparing search engines; qualities to consider (ease of use, relevance of hits, and speed); and six of the most popular search tools (Yahoo, Magellan, InfoSeek, AltaVista, Lycos, and Excite). Lists…

  19. Internet Search Engines - Fluctuations in Document Accessibility.

    ERIC Educational Resources Information Center

    Mettrop, Wouter; Nieuwenhuysen, Paul

    2001-01-01

    Reports an empirical investigation of the consistency of retrieval through Internet search engines. Evaluates 13 engines: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler, and three national Dutch engines: Ilse, Search.nl and Vindex. The focus is on a characteristic related to size: the degree of…

  20. IdentiPy: An Extensible Search Engine for Protein Identification in Shotgun Proteomics.

    PubMed

    Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V

    2018-06-18

    We present an open-source, extensible search engine for shotgun proteomics. Implemented in the Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling a single server installation to be accessed from multiple workstations. Using a simplified version of the X!Tandem scoring algorithm and its novel "autotune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Autotune adjusts the search parameters for the particular data set, resulting in improved search efficiency and a simplified user experience. IdentiPy with the autotune feature shows higher sensitivity than the evaluated search engines. IdentiPy Server has built-in postprocessing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source and can be freely extended with third-party scoring functions or processing algorithms, allowing customization of the search workflow for specialized applications.
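
    The X!Tandem-style scoring that IdentiPy simplifies is commonly summarized in the literature by the hyperscore, which rewards both the total intensity of matched fragment peaks and the counts of matched b- and y-ions; the form below is the commonly published one, and the record above does not spell out IdentiPy's exact simplification:

      \mathrm{hyperscore} \;=\; \Big( \sum_{i \in \text{matched}} I_i \Big) \cdot N_b! \cdot N_y!

    where I_i are the intensities of matched fragment peaks and N_b, N_y are the numbers of matched b- and y-ions.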
