Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems
ERIC Educational Resources Information Center
Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul
2009-01-01
Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
Features: Real-Time Adaptive Feature and Document Learning for Web Search.
ERIC Educational Resources Information Center
Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai
2001-01-01
Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…
EngineSim: Turbojet Engine Simulator Adapted for High School Classroom Use
NASA Technical Reports Server (NTRS)
Petersen, Ruth A.
2001-01-01
EngineSim is an interactive educational computer program that allows users to explore the effect of engine operation on total aircraft performance. The software is supported by a basic propulsion web site called the Beginner's Guide to Propulsion, which includes educator-created, web-based activities for the classroom use of EngineSim. In addition, educators can schedule videoconferencing workshops in which EngineSim's creator demonstrates the software and discusses its use in the educational setting. This software is a product of NASA Glenn Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program.
AdaFF: Adaptive Failure-Handling Framework for Composite Web Services
NASA Astrophysics Data System (ADS)
Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong
In this paper, we propose a novel Web service composition framework that dynamically accommodates various failure-recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation, in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt their failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service with requirement-matched failure handling, without manual development, and contributes to flexible composite Web service design in that service architects need not be concerned with failure handling or the varying requirements of users. For proof of concept, we implement a prototype of AdaFF, which automatically generates a composite service instance in Web Services Business Process Execution Language (WS-BPEL) according to a user's requirements specified in XML, and executes the generated instance on the ActiveBPEL engine.
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, lamination, etc. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance for changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
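The abstract does not give the adaptation laws, so the following is a minimal, hedged sketch of a discrete-time PI tension loop whose gains are adjusted with an MIT-rule-style model-reference update. The first-order plant, reference model, gain limits, and all numeric values are illustrative assumptions, not the authors' design.

```python
import numpy as np

# Hypothetical first-order tension response T[k+1] = a*T[k] + b*u[k]
# (an illustrative stand-in for the real web-span tension dynamics).
a_plant, b_plant = 0.95, 0.08
# Reference model: the closed-loop tension response we would like to match.
a_ref, b_ref = 0.90, 0.10

dt = 0.01           # sample time [s]
gamma = 0.05        # adaptation gain (assumed tuning value)
kp, ki = 0.5, 0.1   # initial PI gains
T_des = 50.0        # desired web tension [N]

T, T_m, integ = 0.0, 0.0, 0.0
for _ in range(3000):
    e = T_des - T                       # tension error
    integ += e * dt
    u = kp * e + ki * integ             # PI control effort (e.g., torque command)

    T = a_plant * T + b_plant * u       # plant update
    T_m = a_ref * T_m + b_ref * T_des   # reference-model update

    # MIT-rule-style gradient update with crude sensitivity estimates:
    # push the gains so that the actual tension T tracks the model output T_m.
    e_m = T - T_m
    kp = float(np.clip(kp - gamma * e_m * e * dt, 0.0, 5.0))
    ki = float(np.clip(ki - gamma * e_m * integ * dt, 0.0, 2.0))

print(f"final tension {T:.2f} N, adapted gains kp={kp:.3f}, ki={ki:.3f}")
```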
ERIC Educational Resources Information Center
Zadahmad, Manouchehr; Yousefzadehfard, Parisa
2016-01-01
Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method to use web technology and software engineering's best practices to provide m-learning functionalities hosted in a MCC-learning system as service. Components hosted by MCC are used to empower developers to create…
Easy Web Interfaces to IDL Code for NSTX Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
W.M. Davis
Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and not needing X-windows, accounts, passwords, etc. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.
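As a hypothetical illustration of the pattern described above (a small web form wrapping an existing command-line analysis routine), the sketch below uses Flask, which is an assumption rather than the NSTX implementation; the `idl -e` invocation and the routine name `plot_shot` are placeholders.

```python
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/webtool")
def webtool():
    # Two example inputs, mirroring a "few-input" Web Tool.
    shot = request.args.get("shot", default=0, type=int)
    signal = request.args.get("signal", default="ip", type=str)

    # Placeholder call to an IDL routine written "in the proper style"
    # (batch-callable, writes its plot to a file); not the actual NSTX interface.
    cmd = ["idl", "-e", f"plot_shot, {shot}, '{signal}', outfile='/tmp/out.png'"]
    try:
        subprocess.run(cmd, check=True, timeout=60)
        status = "analysis complete, output written to /tmp/out.png"
    except (FileNotFoundError, subprocess.SubprocessError):
        status = "IDL not available on this host (placeholder command)"

    return f"shot {shot}, signal {signal}: {status}"

if __name__ == "__main__":
    app.run(port=8080)
```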
Section 5: Adapting Requirements Practices in Different Domains
NASA Astrophysics Data System (ADS)
Robinson, William
Technology has a tremendous impact on society. In recent years, the Internet, the World Wide Web, and Web 2.0 have changed the nature of commerce, government, and of course software development. They affect the practices of producing requirements as well as the kinds of systems to be designed. The effect of converging technologies on the role of requirements engineering is considered in the first article by Matthias Jarke, while the effect of technology on requirements practices is considered in the second article by Walt Scacchi. Together, they provide theoretical and practical perspectives on requirements engineering issues faced in a modern, technology-driven world.
NASA Astrophysics Data System (ADS)
Clark, P. E.; Rilee, M. L.; Curtis, S. A.; Bailin, S.
2012-03-01
We are developing Frontier, a highly adaptable, stably reconfigurable, web-accessible intelligent decision engine capable of optimizing design as well as simulating the operation of complex systems in response to evolving needs and environment.
Adaptive Semantic and Social Web-based learning and assessment environment for the STEM
NASA Astrophysics Data System (ADS)
Babaie, Hassan; Atchison, Chris; Sunderraman, Rajshekhar
2014-05-01
We are building a cloud- and Semantic Web-based personalized, adaptive learning environment for the STEM fields that integrates and leverages Social Web technologies to allow instructors and authors of learning material to collaborate in the semi-automatic development and update of their common domain and task ontologies and in building their learning resources. The semi-automatic ontology learning and development minimize issues related to the design and maintenance of domain ontologies by knowledge engineers who do not have any knowledge of the domain. The Social Web component of the personalized adaptive system will allow individual and group learners to interact with each other, discuss their own learning experience and understanding of course material, and resolve issues related to their class assignments. The adaptive system will be capable of representing key knowledge concepts in different ways and at different difficulty levels based on learners' differences, leading to different understandings of the same STEM content by different learners. It will adapt specific pedagogical strategies to individual learners based on their characteristics, cognition, and preferences; allow authors to assemble remotely accessed learning material into courses; and provide facilities for instructors to assess (in real time) students' perception of course material, monitor their progress in the learning process, and generate timely feedback based on their understanding or misconceptions. The system applies a set of ontologies that structure the learning process, with multiple user-friendly Web interfaces. These include the learning ontology (models learning objects, educational resources, and learning goals); the context ontology (supports the adaptive strategy by detecting the student's situation); the domain ontology (structures concepts and context); the learner ontology (models student profile, preferences, and behavior); task ontologies; the technological ontology (defines devices and places that surround the student); the pedagogy ontology; and the learner ontology (defines time constraint, comment, profile).
NASA Astrophysics Data System (ADS)
2012-07-01
WE RECOMMEND: Fourier NOVA LINK data logger (data logging and analysis); To Engineer is Human (engineering essays and insights); Soap, Science, & Flat-Screen TVs (people, politics, business and science overlap); uLog sensors and sensor adapter (a new addition to the LogIT range offers simplicity and ease of use). WORTH A LOOK: Imagined Worlds (socio-scientific predictions for the future); mini light data logger and mini temperature data logger (small-scale equipment for schools); SensorLab Plus (LogIT's supporting software, with extra features). HANDLE WITH CARE: CAXE110P PICAXE-18M2 data logger (data logger 'on view' but disappoints); Engineering: A Very Short Introduction (a broad-brush treatment fails to satisfy). WEB WATCH: two very different websites for students (advanced physics questions answered and a more general BBC science resource).
Millstone: software for multiplex microbial genome analysis and engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodman, Daniel B.; Kuznetsov, Gleb; Lajoie, Marc J.
Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. Here, we describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.
Millstone: software for multiplex microbial genome analysis and engineering.
Goodman, Daniel B; Kuznetsov, Gleb; Lajoie, Marc J; Ahern, Brian W; Napolitano, Michael G; Chen, Kevin Y; Chen, Changping; Church, George M
2017-05-25
Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. We describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.
The quality of mental health information commonly searched for on the Internet.
Grohol, John M; Slimowicz, Joseph; Granda, Rebecca
2014-04-01
Previous research has reviewed the quality of online information related to specific mental disorders. Yet, no comprehensive study has been conducted on the overall quality of mental health information searched for online. This study examined the first 20 search results of two popular search engines (Google and Bing) for 11 common mental health terms. They were analyzed using the DISCERN instrument, an adaptation of the Depression Website Content Checklist (ADWCC), Flesch Reading Ease and Flesch-Kincaid Grade Level readability measures, HONCode badge display, and commercial status, resulting in an analysis of 440 web pages. Quality of Web site results varied based on the type of disorder examined, with higher quality Web sites found for schizophrenia, bipolar disorder, and dysthymia, and lower quality ratings for phobia, anxiety, and panic disorder Web sites. Of the total Web sites analyzed, 67.5% had good or better quality content. Nearly one-third of the search results produced Web sites from three entities: WebMD, Wikipedia, and the Mayo Clinic. The mean Flesch Reading Ease score was 41.21, and the mean Flesch-Kincaid Grade Level score was 11.68. The presence of the HONCode badge and noncommercial status was found to have a small correlation with Web site quality, and Web sites displaying the HONCode badge and commercial sites had lower readability scores. Popular search engines appear to offer generally reliable results pointing to mostly good or better quality mental health Web sites. However, additional work is needed to make these sites more readable.
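For reference, the two readability measures reported above are standard formulas; the sketch below computes both from word, sentence, and syllable counts. The syllable counter is a rough vowel-group heuristic (an assumption for illustration), not one of the validated counters used in readability tools.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (minimum of 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)

    asl = n_words / sentences   # average sentence length
    asw = syllables / n_words   # average syllables per word
    flesch_reading_ease = 206.835 - 1.015 * asl - 84.6 * asw
    fk_grade_level = 0.39 * asl + 11.8 * asw - 15.59
    return flesch_reading_ease, fk_grade_level

fre, fkgl = readability("Anxiety disorders are common. Treatment usually helps people recover.")
print(f"Flesch Reading Ease: {fre:.1f}, Flesch-Kincaid Grade Level: {fkgl:.1f}")
```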
Font adaptive word indexing of modern printed documents.
Marinai, Simone; Marino, Emanuele; Soda, Giovanni
2006-08-01
We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of one suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.
Generic Service Integration in Adaptive Learning Experiences Using IMS Learning Design
ERIC Educational Resources Information Center
de-la-Fuente-Valentin, Luis; Pardo, Abelardo; Kloos, Carlos Delgado
2011-01-01
IMS Learning Design is a specification to capture the orchestration taking place in a learning scenario. This paper presents an extension called Generic Service Integration. This paradigm allows a bidirectional communication between the course engine in charge of the orchestration and conventional Web 2.0 tools. This communication allows the…
Tozzi, Alberto Eugenio; Buonuomo, Paola Sabrina; Ciofi degli Atti, Marta Luisa; Carloni, Emanuela; Meloni, Marco; Gamba, Fiorenza
2010-01-01
Information available on the Internet about immunizations may influence parents' perception about human papillomavirus (HPV) immunization and their attitude toward vaccinating their daughters. We hypothesized that the quality of information on HPV available on the Internet may vary with language and with the level of knowledge of parents. To this end we compared the quality of a sample of Web pages in Italian with a sample of Web pages in English. Five reviewers assessed the quality of Web pages retrieved with popular search engines using criteria adapted from the Good Information Practice Essential Criteria for Vaccine Safety Web Sites recommended by the World Health Organization. Quality of Web pages was assessed in the domains of accessibility, credibility, content, and design. Scores in these domains were compared through nonparametric statistical tests. We retrieved and reviewed 74 Web sites in Italian and 117 in English. The largest share of retrieved Web pages (33.5%) came from private agencies. Median scores were higher in Web pages in English compared with those in Italian in the domains of accessibility (p < .01), credibility (p < .01), and content (p < .01). The highest credibility and content scores were those of Web pages from governmental agencies or universities. Accessibility scores were positively associated with content scores (p < .01) and with credibility scores (p < .01). A total of 16.2% of Web pages in Italian opposed HPV immunization compared with 6.0% of those in English (p < .05). Quality of information and the number of Web pages opposing HPV immunization may vary with the Web site language. High-quality Web pages on HPV, especially from public health agencies and universities, should be easily accessible and retrievable with common Web search engines. Copyright 2010 Society for Adolescent Medicine. Published by Elsevier Inc. All rights reserved.
Towards Agile Ontology Maintenance
NASA Astrophysics Data System (ADS)
Luczak-Rösch, Markus
Ontologies are an appropriate means to represent knowledge on the Web. Research on ontology engineering has produced practices for integrative lifecycle support. However, broader success of ontologies in Web-based information systems remains out of reach, while more lightweight semantic approaches are rather successful. We assume that, paired with the emerging trend of services and microservices on the Web, new dynamic scenarios are gaining momentum in which a shared knowledge base is made available to several dynamically changing services with disparate requirements. Our work envisions a step towards such a dynamic scenario, in which an ontology adapts in an agile way to the requirements of the accessing services and applications as well as to users' needs, reducing the experts' involvement in ontology maintenance processes.
Miles Discusses Skylab Experiment With NASA Personnel
NASA Technical Reports Server (NTRS)
1972-01-01
Lexington, Massachusetts high school student, Judith Miles, discusses her proposed Skylab experiment with engineers and scientists during a design review of the experiment equipment. At left is Ron Pavlue of Kennedy Space Flight Center (KSC), holding a box is Keith Demorest of Marshall Space Flight Center (MSFC). Right of Miles is Dr. Raymond Gause, also of MSFC, who is Miles' scientific advisor. In her experiment, called the 'Web Formation in Zero Gravity', spiders were released into a box and their actions recorded to determine how well they adapt to the absence of gravity. Spiders are known to adapt quickly to other changes in the environment but nothing was known of their ability to adapt to weightlessness. At the same time spiders were weaving webs in Earth orbit, similar spiders were spinning webs in identical boxes on Earth under full gravity conditions. Miles was among the 25 winners of a contest in which some 3,500 high school students proposed experiments for the following year's Skylab mission. Of the 25 students, 6 did not see their experiments conducted on Skylab because the experiments were not compatible with Skylab hardware and timelines. Of the 19 remaining, 11 experiments required the manufacture of equipment.
HydroViz: design and evaluation of a Web-based tool for improving hydrology education
NASA Astrophysics Data System (ADS)
Habib, E.; Ma, Y.; Williams, D.; Sharif, H. O.; Hossain, F.
2012-10-01
HydroViz is a Web-based, student-centered, educational tool designed to support active learning in the field of Engineering Hydrology. The design of HydroViz is guided by a learning model that is based on learning with data and simulations, using real-world natural hydrologic systems to convey theoretical concepts, and using Web-based technologies for dissemination of the hydrologic education developments. This model, while being used in a hydrologic education context, can be adapted in other engineering educational settings. HydroViz leverages the free Google Earth resources to enable presentation of geospatial data layers and embed them in web pages that have the same look and feel of Google Earth. These design features significantly facilitate the dissemination and adoption of HydroViz by any interested educational institutions regardless of their access to data or computer models. To facilitate classroom usage, HydroViz is populated with a set of course modules that can be used incrementally within different stages of an engineering hydrology curriculum. A pilot evaluation study was conducted to determine the effectiveness of the HydroViz tool in delivering its educational content, to examine the buy-in of the program by faculty and students, and to identify specific project components that need to be further pursued and improved. A total of 182 students from seven freshmen and senior-level undergraduate classes in three universities participated in the study. HydroViz was effective in facilitating students' learning and understanding of hydrologic concepts and increasing related skills. Students had positive perceptions of various features of HydroViz and they believe that HydroViz fits well in the curriculum. In general, HydroViz tends to be more effective with students in senior-level classes than with students in freshmen classes. Lessons learned from this pilot study provide guidance for future adaptation and expansion studies to scale up the application and utility of HydroViz and other similar systems in various hydrology and water-resource engineering curriculum settings. The paper presents a set of design principles that contribute to the development of other active hydrology educational systems.
Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick
2017-04-01
Google Earth Engine (GEE) is a parallel geospatial processing platform, which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In the last years, WebGL became a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of a Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image in visually homogeneous regions. It adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to a significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of the reservoirs using multispectral satellite imagery.
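The WebGL shader implementation described above is not spelled out in the abstract; as a point of comparison, the CPU-side equivalent of SLIC superpixel segmentation is a few lines with scikit-image (the image path and parameter values below are assumptions for illustration, not the authors' settings).

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic, mark_boundaries

# Load a satellite image tile (path is a placeholder).
image = io.imread("tile.png")[:, :, :3]

# SLIC decomposes the image into visually homogeneous superpixels using
# a localized k-means in combined color/position space.
segments = slic(image, n_segments=500, compactness=10.0, start_label=1)

print("number of superpixels:", len(np.unique(segments)))
overlay = mark_boundaries(image, segments)  # draw boundaries for inspection
io.imsave("tile_superpixels.png", (overlay * 255).astype(np.uint8))
```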
NASA Technical Reports Server (NTRS)
Laakso, J. H.; Zimmerman, D. K.
1972-01-01
An advanced composite shear web design concept was developed for the Space Shuttle orbiter main engine thrust beam structure. Various web concepts were synthesized by a computer-aided adaptive random search procedure. A practical concept is identified having a titanium-clad ±45° boron/epoxy web plate with vertical boron/epoxy reinforced aluminum stiffeners. The boron/epoxy laminate contributes to the strength and stiffness efficiency of the basic web section. The titanium cladding functions to protect the polymeric laminate parts from damaging environments and is chem-milled to provide reinforcement in selected areas. Detailed design drawings are presented for both boron/epoxy reinforced and all-metal shear webs. The weight saving offered is 24% relative to all-metal construction at an attractive cost per pound of weight saved, based on the detailed designs. Small-scale element tests substantiate the boron/epoxy reinforced design details in critical areas. The results show that the titanium cladding reliably reinforces the web laminate in critical edge load transfer and stiffener fastener hole areas.
Web impact factor: a bibliometric criterion applied to medical informatics societies' web sites.
Soualmia, Lina Fatima; Darmoni, Stéfan Jacques; Le Duff, Franck; Douyere, Magaly; Thelwall, Maurice
2002-01-01
Several methods are available to evaluate and compare medical journals. The most popular is the journal Impact Factor, derived from averaging counts of citations to articles. Ingwersen adapted this method to assess the attractiveness of Web sites, defining the external Web Impact Factor (WIF) to be the number of external pages containing a link to a given Web site. This paper applies the WIF to 43 medical informatics societies' Web sites, using advanced search engine queries to obtain the necessary link counts. The WIF was compared to the number of publications available in the Medline bibliographic database in medical informatics in these 43 countries. Between these two metrics, the observed Pearson correlation was 0.952 (p < 0.01) and the Spearman rank correlation was 0.548 (p < 0.01), showing in both cases a positive and strong significant correlation. The WIF of a medical informatics society's Web site is statistically related to national productivity, and discrepancies can be used to indicate countries where there are either weak medical informatics associations, or ones that do not make optimal use of the Web.
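As a hedged illustration of the comparison described (not the authors' data), the snippet below correlates external WIF values, i.e. external inlink counts per site, with Medline publication counts using SciPy; all numbers are made up.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: external inlink counts for each society's site (the
# external Web Impact Factor as defined above) and the corresponding number
# of Medline-indexed medical informatics publications for that country.
external_wif = [1200, 340, 95, 20, 560, 75]
medline_pubs = [3100, 800, 150, 40, 1500, 210]

r_pearson, p_pearson = pearsonr(external_wif, medline_pubs)
rho_spearman, p_spearman = spearmanr(external_wif, medline_pubs)
print(f"Pearson r = {r_pearson:.3f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho_spearman:.3f} (p = {p_spearman:.3f})")
```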
Semantic similarity measure in biomedical domain leverage web search engine.
Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei
2010-01-01
Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in Semantic Web related applications has deployed various semantic similarity measures. Despite the usefulness of these measurements in those applications, measuring semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by the Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for querying P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
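The abstract does not spell out the individual page-count scores; the sketch below shows commonly used forms of such measures (WebJaccard, WebDice, and WebPMI) computed from hit counts for P, Q, and "P AND Q". The exact definitions and thresholds in the paper may differ, and the hit counts here are placeholders rather than live search-engine queries.

```python
import math

# Placeholder hit counts; in the paper these come from a Web search engine.
N = 1e10            # assumed size of the indexed Web (normalization constant)
count_p = 254_000   # hits for term P, e.g. "myocardial infarction"
count_q = 189_000   # hits for term Q, e.g. "heart attack"
count_pq = 61_000   # hits for the conjunctive query "P AND Q"

def web_jaccard(p, q, pq):
    return 0.0 if pq == 0 else pq / (p + q - pq)

def web_dice(p, q, pq):
    return 0.0 if pq == 0 else 2 * pq / (p + q)

def web_pmi(p, q, pq, n):
    # Pointwise mutual information from page counts, normalized by log(n).
    if pq == 0:
        return 0.0
    return math.log2((pq / n) / ((p / n) * (q / n))) / math.log2(n)

print("WebJaccard:", round(web_jaccard(count_p, count_q, count_pq), 4))
print("WebDice:   ", round(web_dice(count_p, count_q, count_pq), 4))
print("WebPMI:    ", round(web_pmi(count_p, count_q, count_pq, N), 4))
```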
The Use of Web Search Engines in Information Science Research.
ERIC Educational Resources Information Center
Bar-Ilan, Judit
2004-01-01
Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…
Adding a visualization feature to web search engines: it's time.
Wong, Pak Chung
2008-01-01
It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.
Health search engine with e-document analysis for reliable search results.
Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine
2006-01-01
After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, explanation of the original functionalities, and adaptability to the audience. Thanks to the evaluation outcomes, WRAPIN is now in operation on the HON web site (http://www.healthonnet.org), free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user looks for trustworthy health and medical information or wants to automatically check the doubtful content of a Web page.
Interactive effects of body-size structure and adaptive foraging on food-web stability.
Heckmann, Lotta; Drossel, Barbara; Brose, Ulrich; Guill, Christian
2012-03-01
Body-size structure of food webs and adaptive foraging of consumers are two of the dominant concepts of our understanding how natural ecosystems maintain their stability and diversity. The interplay of these two processes, however, is a critically important yet unresolved issue. To fill this gap in our knowledge of ecosystem stability, we investigate dynamic random and niche model food webs to evaluate the proportion of persistent species. We show that stronger body-size structures and faster adaptation stabilise these food webs. Body-size structures yield stabilising configurations of interaction strength distributions across food webs, and adaptive foraging emphasises links to resources closer to the base. Moreover, both mechanisms combined have a cumulative effect. Most importantly, unstructured random webs evolve via adaptive foraging into stable size-structured food webs. This offers a mechanistic explanation of how size structure adaptively emerges in complex food webs, thus building a novel bridge between these two important stabilising mechanisms. © 2012 Blackwell Publishing Ltd/CNRS.
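As a toy illustration of adaptive foraging (not the authors' niche-model food webs), the sketch below evolves one consumer's foraging efforts over three resources with a standard replicator-style rule: effort shifts toward resources that currently yield more than the consumer's average gain. The gains, adaptation rate, and time step are assumptions.

```python
import numpy as np

G = 1.0                                    # adaptation rate (faster adaptation = larger G)
dt = 0.01
gains = np.array([0.2, 0.5, 1.0])          # per-unit-effort gain from each resource
efforts = np.array([1 / 3, 1 / 3, 1 / 3])  # initial foraging efforts (sum to 1)

for _ in range(2000):
    mean_gain = float(efforts @ gains)
    # Replicator-style adaptive foraging: d a_i/dt = G * a_i * (g_i - mean gain).
    efforts = efforts + dt * G * efforts * (gains - mean_gain)
    efforts = efforts / efforts.sum()      # keep efforts normalized

print("final foraging efforts:", np.round(efforts, 3))
# Effort concentrates on the most profitable link, analogous to the
# stabilizing redistribution of interaction strengths described above.
```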
Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective
ERIC Educational Resources Information Center
Hadjerrouit, Said
2005-01-01
In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
...-FF08EACT00] Trinity Adaptive Management Working Group; Public Meeting, Teleconference and Web-Based Meeting... Service, announce a public meeting, teleconference and web-based meeting of the Trinity Adaptive Management Working Group (TAMWG). DATES: Public meeting, Teleconference, and web-based meeting: Tuesday June...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-13
...-FF08EACT00] Trinity Adaptive Management Working Group; Public Meeting, Teleconference and Web-Based Meeting... Service, announce a public meeting, teleconference, and web-based meeting of the Trinity Adaptive Management Working Group (TAMWG). DATES: Public meeting, Teleconference, and web-based meeting: Tuesday...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
...-FF08EACT00] Trinity Adaptive Management Working Group; Public Meeting, Teleconference and Web-Based Meeting... Service, announce a public meeting, teleconference and web-based meeting of the Trinity Adaptive Management Working Group (TAMWG). DATES: Public meeting, Teleconference, and web-based meeting: Monday April...
Integrating Engineering Data Systems for NASA Spaceflight Projects
NASA Technical Reports Server (NTRS)
Carvalho, Robert E.; Tollinger, Irene; Bell, David G.; Berrios, Daniel C.
2012-01-01
NASA has a large range of custom-built and commercial data systems to support spaceflight programs. Some of the systems are re-used by many programs and projects over time. Management and systems engineering processes require integration of data across many of these systems, a difficult problem given the widely diverse nature of system interfaces and data models. This paper describes an ongoing project to use a central data model with a web services architecture to support the integration and access of linked data across engineering functions for multiple NASA programs. The work involves the implementation of a web service-based middleware system called Data Aggregator to bring together data from a variety of systems to support space exploration. Data Aggregator includes a central data model registry for storing and managing links between the data in disparate systems. Initially developed for NASA's Constellation Program needs, Data Aggregator is currently being repurposed to support the International Space Station Program and new NASA projects with processes that involve significant aggregating and linking of data. This change in user needs led to development of a more streamlined data model registry for Data Aggregator in order to simplify adding new project application data, as well as standardization of the Data Aggregator query syntax to facilitate cross-application querying by client applications. This paper documents the transition from a set of stand-alone engineering systems, from which data are manually retrieved and integrated, to a web of engineering data systems, from which the latest data are automatically retrieved and more quickly and accurately integrated. The paper presents the lessons learned through these efforts, including the design and development of a service-oriented architecture and the evolution of the data model registry approach as the effort continues to evolve and adapt to support multiple NASA programs and priorities.
ERIC Educational Resources Information Center
Mitsuhara, Hiroyuki; Kurose, Yoshinobu; Ochi, Youji; Yano, Yoneo
The authors developed a Web-based Adaptive Educational System (Web-based AES) named ITMS (Individualized Teaching Material System). ITMS adaptively integrates knowledge on the distributed Web pages and generates individualized teaching material that has various contents. ITMS also presumes the learners' knowledge levels from the states of their…
Lawrence; Giles
1998-04-03
The coverage and recency of the major World Wide Web search engines were analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: no single engine indexes more than about one-third of the "indexable Web," the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages.
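The overlap-based estimate is essentially a capture-recapture calculation: if engine A returns n_a relevant pages, engine B returns n_b, and n_ab appear in both, then under an independence assumption the total population is roughly n_a * n_b / n_ab. A minimal sketch follows, with made-up counts rather than the paper's measurements.

```python
def overlap_size_estimate(n_a: int, n_b: int, n_ab: int) -> float:
    """Capture-recapture (Lincoln-Petersen) style estimate of total
    population size from two samples and their overlap."""
    if n_ab == 0:
        raise ValueError("no overlap: estimate is unbounded")
    return n_a * n_b / n_ab

# Illustrative counts of relevant pages returned for a set of test queries.
n_a, n_b, n_ab = 900, 1200, 450
print(f"estimated indexable population ~ {overlap_size_estimate(n_a, n_b, n_ab):,.0f} pages")
```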
ERIC Educational Resources Information Center
Medina-Dominguez, Fuensanta; Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Amescua, Antonio
2010-01-01
The development of collaborative Web applications does not follow a software engineering methodology. This is because when university students study Web applications in general, and collaborative Web portals in particular, they are not being trained in the use of software engineering techniques to develop collaborative Web portals. This paper…
Engineering Analysis Using a Web-based Protocol
NASA Technical Reports Server (NTRS)
Schoeffler, James D.; Claus, Russell W.
2002-01-01
This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
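The abstract mentions encapsulating engineering data in XML but gives no schema; the fragment below is a hypothetical sketch of how a one-dimensional analysis case (element and attribute names invented for illustration) could be serialized and read back with Python's standard library.

```python
import xml.etree.ElementTree as ET

# Build a small, hypothetical analysis-input document.
case = ET.Element("AnalysisCase", name="inlet_test_01", code="LAPIN")
inputs = ET.SubElement(case, "Inputs")
ET.SubElement(inputs, "Parameter", name="mach_number", units="-").text = "2.5"
ET.SubElement(inputs, "Parameter", name="altitude", units="m").text = "18000"

xml_text = ET.tostring(case, encoding="unicode")
print(xml_text)

# Retrieve the stored parameters, as a client of the framework might.
parsed = ET.fromstring(xml_text)
for p in parsed.iter("Parameter"):
    print(p.get("name"), "=", p.text, p.get("units"))
```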
Realtime Data to Enable Earth-Observing Sensor Web Capabilities
NASA Astrophysics Data System (ADS)
Seablom, M. S.
2015-12-01
Over the past decade NASA's Earth Science Technology Office (ESTO) has invested in new technologies for information systems to enhance the Earth-observing capabilities of satellites, aircraft, and ground-based in situ observations. One focus area has been to create a common infrastructure for coordinated measurements from multiple vantage points which could be commanded either manually or through autonomous means, such as from a numerical model. This paradigm became known as the sensor web, formally defined to be "a coherent set of heterogeneous, loosely-coupled, distributed observing nodes interconnected by a communications fabric that can collectively behave as a single dynamically adaptive and reconfigurable observing system". This would allow for adaptive targeting of rapidly evolving, transient, or variable meteorological features to improve our ability to monitor, understand, and predict their evolution. It would also enable measurements earmarked at critical regions of the atmosphere that are highly sensitive to data analysis errors, thus offering the potential for significant improvements in the predictive skill of numerical weather forecasts. ESTO's investment strategy was twofold. Recognizing that implementation of an operational sensor web would not only involve technical cost and risk but also would require changes to the culture of how flight missions were designed and operated, ESTO funded the development of a mission-planning simulator that would quantitatively assess the added value of coordinated observations. The simulator was designed to provide the capability to perform low-cost engineering and design trade studies using synthetic data generated by observing system simulation experiments (OSSEs). The second part of the investment strategy was to invest in prototype applications that implemented key features of a sensor web, with the dual goals of developing a sensor web reference architecture as well as supporting useful science activities that would produce immediate benefit. We briefly discuss three of ESTO's sensor web projects that resulted from solicitations released in 2008 and 2011: the Earth System Sensor Web Simulator, the Earth Phenomena Observing System, and the Sensor Web 3G Namibia Flood Pilot.
Cyber Moat: Adaptive Virtualized Network Framework for Deception and Disinformation
2016-12-12
As one type of bot, web crawlers have been leveraged by search engines (e.g., Googlebot by Google) to popularize websites through website indexing...However, the number of malicious bots is increasing too. To regulate the behavior of crawlers, most websites include a file called "robots.txt" that...However, "robots.txt" only provides a guideline, and almost all malicious robots ignore it. Moreover, since this file is publicly available, malicious
Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs
NASA Astrophysics Data System (ADS)
Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.
2006-12-01
The sensor web is a distributed, federated infrastructure much like its predecessors, the internet and the world wide web. It will be a federation of many sensor webs, large and small, under many distinct spans of control, that loosely cooperate and share information for many purposes. Realistically, it will grow piecemeal as distinct, individual systems are developed and deployed, some expressly built for a sensor web while many others were created for other purposes. Therefore, the architecture of the sensor web is of fundamental import, and architectural strictures that inhibit innovation, experimentation, sharing or scaling may prove fatal. Drawing upon the architectural lessons of the world wide web, we offer a novel system architecture, the flow web, that elevates flows, sequences of messages over a domain of interest and constrained in both time and space, to a position of primacy as a dynamic, real-time medium of information exchange for computational services. The flow web captures, in a single, uniform architectural style, the conflicting demands of the sensor web, including dynamic adaptations to changing conditions, ease of experimentation, rapid recovery from the failures of sensors and models, automated command and control, incremental development and deployment, and integration at multiple levels—in many cases, at different times. Our conception of sensor webs—dynamic amalgamations of sensor webs, each constructed within a flow web infrastructure—holds substantial promise for earth science missions in general, and for weather, air quality, and disaster management in particular. Flow webs are, by philosophy, design and implementation, a dynamic infrastructure that permits massive adaptation in real time. Flows may be attached to and detached from services at will, even while information is in transit through the flow. This concept, flow mobility, permits dynamic integration of earth science products and modeling resources in response to real-time demands. Flows are the connective tissue of flow webs—massive computational engines organized as directed graphs whose nodes are semi-autonomous components and whose edges are flows. The individual components of a flow web may themselves be encapsulated flow webs. In other words, a flow web subgraph may be presented to a yet larger flow web as a single, seamless component. Flow webs, at all levels, may be edited and modified while still executing. Within a flow web, individual components may be added, removed, started, paused, halted, reparameterized, or inspected. The topology of a flow web may be changed at will. Thus, flow webs exhibit an extraordinary degree of adaptivity and robustness, as they are explicitly designed to be modified on the fly, an attribute well suited for dynamic model interactions in sensor webs. We describe our concept for a sensor web, implemented as a flow web, in the context of a wildfire disaster management system for the southern California region. Comprehensive wildfire management requires cooperation among multiple agencies. Flow webs allow agencies to share resources in exactly the manner they choose. We will explain how to employ flow webs and agents to integrate satellite remote sensing data, models, in-situ sensors, UAVs and other resources into a sensor web that interconnects organizations and their disaster management tools in a manner that simultaneously preserves their independence and builds upon the individual strengths of agency-specific models and data sources.
Proposition and Organization of an Adaptive Learning Domain Based on Fusion from the Web
ERIC Educational Resources Information Center
Chaoui, Mohammed; Laskri, Mohamed Tayeb
2013-01-01
The Web allows self-navigated education through interaction with large amounts of Web resources. While enjoying the flexibility of Web tools, authors may struggle with searching and filtering Web resources when they face various resource formats and complex structures. An adaptation of extracted Web resources must be assured by authors, to give…
A study of medical and health queries to web search engines.
Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk
2004-03-01
This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) analyzing medical or health advice-seeking queries beginning with the word 'should'. Findings suggest that: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased.
Web Search Studies: Multidisciplinary Perspectives on Web Search Engines
NASA Astrophysics Data System (ADS)
Zimmer, Michael
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Web Spam, Social Propaganda and the Evolution of Search Engine Rankings
NASA Astrophysics Data System (ADS)
Metaxas, Panagiotis Takis
Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.
None Available
2018-02-06
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Comparison of Physics Frameworks for WebGL-Based Game Engine
NASA Astrophysics Data System (ADS)
Yogya, Resa; Kosala, Raymond
2014-03-01
Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, much of its potential in the game development area remains unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks: Bullet, Cannon, and JigLib into a WebGL-based game engine. Using experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and that Bullet is the best physics framework to be integrated into the WebGL-based game engine.
Creating a Classroom Kaleidoscope with the World Wide Web.
ERIC Educational Resources Information Center
Quinlan, Laurie A.
1997-01-01
Discusses the elements of classroom Web presentations: planning; construction, including design tips; classroom use; and assessment. Lists 14 World Wide Web resources for K-12 teachers; Internet search tools (directories, search engines and meta-search engines); a Web glossary; and an example of HTML for a simple Web page. (PEN)
Situated mathematics teaching within electrical engineering courses
NASA Astrophysics Data System (ADS)
Hennig, Markus; Mertsching, Bärbel; Hilkenmeier, Frederic
2015-11-01
The initial phase of undergraduate engineering degree programmes often comprises courses requiring mathematical expertise which in some cases clearly exceeds school mathematics but is imparted only later in mathematics courses. In this article, an approach addressing this challenge by way of example within a fundamentals of electrical engineering course is presented. The concept focuses on gaining specific mathematical knowledge and competencies in the technical context of this course. For this purpose, a complementary blended learning scenario centring around a web-based learning platform and involving an adaptation of the course was developed. The concept particularly considers the heterogeneity of today's student groups and is discussed with regard to related approaches, didactical considerations, and technical implementation. For the interventions, the results of a questionnaire-based evaluation are presented, showing students' acceptance and a positive influence on examination performance.
The Evolution of Web Searching.
ERIC Educational Resources Information Center
Green, David
2000-01-01
Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…
NASA Astrophysics Data System (ADS)
Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina
Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).
2016-07-21
Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't
Web Feet Guide to Search Engines: Finding It on the Net.
ERIC Educational Resources Information Center
Web Feet, 2001
2001-01-01
This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)
Semantic similarity measures in the biomedical domain by leveraging a web search engine.
Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching
2013-07-01
Various studies of web-related semantic similarity measures have been conducted. However, measuring the semantic similarity between two terms remains a challenging task. Traditional ontology-based methodologies are limited in that both concepts must reside in the same ontology tree(s); in practice, this assumption does not always hold. On the other hand, if the corpus is sufficiently large, corpus-based methodologies can overcome this limitation, and the web is a continuously and enormously growing corpus. Therefore, a method of estimating semantic similarity is proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. Features are extracted as the co-occurrence patterns of two given terms P and Q, by querying P, Q, and P AND Q, together with the web search hit counts of defined lexico-syntactic patterns. The similarity scores of the different patterns are combined by adapting support vector machines for classification, to leverage the robustness of the semantic similarity measures. Experimental results validated against two datasets, dataset 1 provided by A. Hliaoutakis and dataset 2 provided by T. Pedersen, are presented and discussed. In dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. In dataset 2, the proposed method obtains the best correlation coefficients with physician scores (SNOMED-CT: 0.705; MeSH: 0.723) compared with other methods, although the correlation coefficients with coder scores (SNOMED-CT: 0.496; MeSH: 0.539) show the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to physicians' ratings. Furthermore, the study provides a cornerstone investigation for extracting fully relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.
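The pattern formulas are not reproduced in the abstract. As a minimal sketch, assuming the standard page-count co-occurrence measures often used with search engine hit counts (WebJaccard, WebOverlap, WebDice and a PMI-style score) and made-up counts for two biomedical terms P and Q, the feature extraction step could look like this:

```python
import math

def cooccurrence_features(hits_p, hits_q, hits_pq, n=1e10):
    """Page-count-based co-occurrence scores for terms P and Q.

    hits_p, hits_q : hit counts for the individual queries "P" and "Q"
    hits_pq        : hit count for the conjunctive query "P AND Q"
    n              : assumed number of documents indexed by the engine
    """
    if hits_pq == 0:
        return {"jaccard": 0.0, "overlap": 0.0, "dice": 0.0, "pmi": 0.0}
    jaccard = hits_pq / (hits_p + hits_q - hits_pq)
    overlap = hits_pq / min(hits_p, hits_q)
    dice = 2 * hits_pq / (hits_p + hits_q)
    pmi = math.log2((hits_pq / n) / ((hits_p / n) * (hits_q / n)))
    return {"jaccard": jaccard, "overlap": overlap, "dice": dice, "pmi": pmi}

# Example with made-up hit counts for two biomedical terms.
print(cooccurrence_features(hits_p=1_200_000, hits_q=850_000, hits_pq=95_000))
```

In the paper these pattern scores are then combined with a support vector machine; a library such as scikit-learn's SVC, trained on term pairs labeled with expert similarity ratings, would be one way to reproduce that classification step.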
Dynamics of a macroscopic model characterizing mutualism of search engines and web sites
NASA Astrophysics Data System (ADS)
Wang, Yuanshi; Wu, Hong
2006-05-01
We present a model describing the mutualistic relationship between search engines and web sites. In the model, search engines and web sites benefit from each other, while the search engines are derived products of the web sites and cannot survive independently. Our goal is to identify strategies for search engines to survive in the internet market. Mathematical analysis of the model shows that mutualism does not always result in survival: we derive various conditions under which the search engines tend to extinction, persist, or grow explosively, and from these conditions we deduce a series of survival strategies. We present conditions under which the initial number of consumers of the search engines contributes little to their persistence, in agreement with results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies accordingly. We also give suggestions for web sites on cooperating with search engines in order to form a win-win situation.
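The abstract does not reproduce the governing equations. As a minimal sketch, assuming a Lotka-Volterra-type obligate mutualism in which web sites x persist on their own while search engines y are derived products that decline without them, the structure described above could be written as:

```latex
\begin{aligned}
\frac{dx}{dt} &= x\left(r_1 - a_{11}x + a_{12}y\right),\\
\frac{dy}{dt} &= y\left(-r_2 + a_{21}x - a_{22}y\right),
\end{aligned}
\qquad r_1,\, r_2,\, a_{ij} > 0.
```

Here the negative intrinsic rate -r_2 encodes the engines' inability to survive independently, and the balance of a_{21}x against r_2 + a_{22}y gives the kind of threshold conditions that separate extinction, persistence, and explosive growth; the paper's actual model and thresholds may differ.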
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-31
...-FF08EACT00] Trinity Adaptive Management Working Group; Public Meeting, Teleconference and Web-Based Meeting... Trinity Management Council (TMC). DATES: Public meeting, Teleconference, and web-based meeting: TAMWG and..., Douglas City, CA 96024. You may participate in person or by teleconference or web-based meeting from your...
IntegromeDB: an integrated system and biological search engine.
Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia
2012-01-19
With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.
Indexing and Retrieval for the Web.
ERIC Educational Resources Information Center
Rasmussen, Edie M.
2003-01-01
Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…
BPELPower—A BPEL execution engine for geospatial web services
NASA Astrophysics Data System (ADS)
Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi
2012-10-01
The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements lie especially in its capabilities for handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios are discussed in detail to demonstrate the capabilities of BPELPower, showing a standards-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high-performance parallel processing and broader Web paradigms.
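BPEL itself is an XML orchestration language, so the sketch below only illustrates, in plain Python, the kind of WFS-to-WPS chaining that a BPELPower workflow would express declaratively; the endpoints, layer name, and process identifier are hypothetical.

```python
import requests

# Hypothetical service endpoints; real deployments would use their own URLs.
WFS_URL = "https://example.org/geoserver/wfs"
WPS_URL = "https://example.org/wps"

# Step 1: fetch a GML feature collection from a Web Feature Service.
gml = requests.get(WFS_URL, params={
    "service": "WFS", "version": "1.1.0", "request": "GetFeature",
    "typeName": "demo:watersheds",
    "outputFormat": "text/xml; subtype=gml/3.1.1",
}, timeout=60).text

# Step 2: hand the GML to a Web Processing Service operation (name assumed).
execute_request = f"""<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="1.0.0"
    xmlns:wps="http://www.opengis.net/wps/1.0.0"
    xmlns:ows="http://www.opengis.net/ows/1.1">
  <ows:Identifier>demo:clipToBoundary</ows:Identifier>
  <wps:DataInputs>
    <wps:Input>
      <ows:Identifier>features</ows:Identifier>
      <wps:Data>
        <wps:ComplexData mimeType="text/xml; subtype=gml/3.1.1">{gml}</wps:ComplexData>
      </wps:Data>
    </wps:Input>
  </wps:DataInputs>
</wps:Execute>"""

result = requests.post(WPS_URL, data=execute_request.encode("utf-8"),
                       headers={"Content-Type": "text/xml"}, timeout=300)
print(result.status_code, result.text[:200])
```

In an actual BPEL process each of these calls would be an invoke activity, with the GML hand-off expressed as a variable assignment handled by the engine's GML-aware data handling.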
A user-oriented web crawler for selectively acquiring online content in e-health research.
Xu, Songhua; Yoon, Hong-Jun; Tourassi, Georgia
2014-01-01
Life stories of diseased and healthy individuals are abundantly available on the Internet. Collecting and mining such online content can offer many valuable insights into patients' physical and emotional states throughout the pre-diagnosis, diagnosis, treatment and post-treatment stages of the disease compared with those of healthy subjects. However, such content is widely dispersed across the web. Using traditional query-based search engines to manually collect relevant materials is rather labor intensive and often incomplete due to resource constraints in terms of human query composition and result parsing efforts. The alternative option, blindly crawling the whole web, has proven inefficient and unaffordable for e-health researchers. We propose a user-oriented web crawler that adaptively acquires user-desired content on the Internet to meet the specific online data source acquisition needs of e-health researchers. Experimental results on two cancer-related case studies show that the new crawler can substantially accelerate the acquisition of highly relevant online content compared with the existing state-of-the-art adaptive web crawling technology. For the breast cancer case study using the full training set, the new method achieves a cumulative precision between 74.7 and 79.4% after 5 h of execution till the end of the 20-h long crawling session as compared with the cumulative precision between 32.8 and 37.0% using the peer method for the same time period. For the lung cancer case study using the full training set, the new method achieves a cumulative precision between 56.7 and 61.2% after 5 h of execution till the end of the 20-h long crawling session as compared with the cumulative precision between 29.3 and 32.4% using the peer method. Using the reduced training set in the breast cancer case study, the cumulative precision of our method is between 44.6 and 54.9%, whereas the cumulative precision of the peer method is between 24.3 and 26.3%; for the lung cancer case study using the reduced training set, the cumulative precisions of our method and the peer method are, respectively, between 35.7 and 46.7% versus between 24.1 and 29.6%. These numbers clearly show a consistently superior accuracy of our method in discovering and acquiring user-desired online content for e-health research. The implementation of our user-oriented web crawler is freely available to non-commercial users via the following Web site: http://bsec.ornl.gov/AdaptiveCrawler.shtml. The Web site provides a step-by-step guide on how to execute the web crawler implementation. In addition, the Web site provides the two study datasets including manually labeled ground truth, initial seeds and the crawling results reported in this article.
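The crawler implementation itself is distributed at the URL above. As a rough, generic sketch of the best-first crawling loop that adaptive, user-oriented crawlers of this kind build on (not the authors' actual algorithm), with a placeholder keyword scorer standing in for the trained relevance model:

```python
import heapq
import re
import requests

def relevance(text):
    """Placeholder relevance scorer; the paper trains a user-oriented model
    from labeled examples, which could replace this keyword heuristic."""
    keywords = ("breast cancer", "diagnosis", "treatment", "survivor")
    return sum(text.lower().count(k) for k in keywords)

def focused_crawl(seeds, max_pages=50):
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated scores
    heapq.heapify(frontier)
    seen, collected = set(seeds), []
    while frontier and len(collected) < max_pages:
        _, url = heapq.heappop(frontier)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        score = relevance(html)
        if score > 0:
            collected.append((url, score))
        # Enqueue outlinks, prioritized by the parent page's relevance.
        for link in re.findall(r'href="(https?://[^"#]+)"', html):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return collected
```

For the full versus reduced training set comparison reported above, the placeholder scorer would be replaced by a classifier trained on the corresponding labeled seed documents.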
Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration
NASA Technical Reports Server (NTRS)
Lin, Risheng; Afjeh, Abdollah A.
2003-01-01
This paper discusses the detailed design of an XML databinding framework for aircraft engine simulation. The framework provides an object interface to access and use engine data, while at the same time preserving the meaning of the original data. The language-independent representation of engine component data enables users to move XML data over HTTP across disparate networks. The application of this framework is demonstrated via a web-based turbofan propulsion system simulation using the World Wide Web (WWW). A Java Servlet-based web component architecture is used for rendering XML engine data into HTML and handling input events from the user, which allows users to interact with simulation data from a web browser. The simulation data can also be saved to a local disk for archiving or to restart the simulation at a later time.
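As an illustration of what XML databinding means in practice, here is a minimal Python sketch that maps a hypothetical engine-component XML document (the element and attribute names are invented, not the paper's schema) onto typed objects:

```python
from dataclasses import dataclass
import xml.etree.ElementTree as ET

# Hypothetical engine-component XML; the paper's actual schema is not given.
SAMPLE = """
<engine name="demo-turbofan">
  <component id="fan" pressureRatio="1.7" efficiency="0.92"/>
  <component id="compressor" pressureRatio="14.0" efficiency="0.88"/>
</engine>
"""

@dataclass
class Component:
    id: str
    pressure_ratio: float
    efficiency: float

def bind(xml_text):
    """Map XML elements onto typed Python objects (a minimal databinding)."""
    root = ET.fromstring(xml_text)
    return [Component(c.get("id"),
                      float(c.get("pressureRatio")),
                      float(c.get("efficiency")))
            for c in root.findall("component")]

print(bind(SAMPLE))
```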
Strategies for Adapting WebQuests for Students with Learning Disabilities
ERIC Educational Resources Information Center
Skylar, Ashley A.; Higgins, Kyle; Boone, Randall
2007-01-01
WebQuests are gaining popularity as teachers explore using the Internet for guided learning activities. A WebQuest involves students working on a task that is broken down into clearly defined steps. Students often work in groups to actively conduct the research. This article suggests a variety of methods for adapting WebQuests for students with…
IntegromeDB: an integrated system and biological search engine
2012-01-01
Background With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Description Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. Conclusions The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback. PMID:22260095
Faculty Recommendations for Web Tools: Implications for Course Management Systems
ERIC Educational Resources Information Center
Oliver, Kevin; Moore, John
2008-01-01
A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…
Helping Students Choose Tools To Search the Web.
ERIC Educational Resources Information Center
Cohen, Laura B.; Jacobson, Trudi E.
2000-01-01
Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…
Spatial Audio on the Web: Or Why Can't I hear Anything Over There?
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahunada, Albert J. (Technical Monitor)
1997-01-01
Auditory complexity, freedom of movement and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, a lot of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.
MIRASS: medical informatics research activity support system using information mashup network.
Kiah, M L M; Zaidan, B B; Zaidan, A A; Nabi, Mohamed; Ibraheem, Rabiu
2014-04-01
The advancement of information technology has facilitated the automation and feasibility of online information sharing. The second generation of the World Wide Web (Web 2.0) enables the collaboration and sharing of online information through Web-serving applications. Data mashup, which is considered a Web 2.0 platform, plays an important role in information and communication technology applications. However, few ideas have been transformed into education and research domains, particularly in medical informatics. The creation of a friendly environment for medical informatics research requires the removal of certain obstacles in terms of search time, resource credibility, and search result accuracy. This paper considers three glitches that researchers encounter in medical informatics research; these glitches include the quality of papers obtained from scientific search engines (particularly Web of Science and Science Direct), the quality of articles from the indices of these search engines, and the customizability and flexibility of these search engines. A customizable search engine for trusted resources of medical informatics was developed and implemented through data mashup. Results show that the proposed search engine improves the usability of scientific search engines for medical informatics. The Pipe search engine was found to be more efficient than the other engines.
Food-web complexity emerging from ecological dynamics on adaptive networks.
Garcia-Domingo, Josep L; Saldaña, Joan
2007-08-21
Food webs are complex networks describing trophic interactions in ecological communities. Since Robert May's seminal work on random structured food webs, the complexity-stability debate has been a central issue in ecology: does network complexity increase or decrease food-web persistence? A multi-species predator-prey model incorporating adaptive predation shows that the action of ecological dynamics on the topology of a food web (whose initial configuration is generated either by the cascade model or by the niche model) renders, when a significant fraction of adaptive predators is present, hyperbolic complexity-persistence relationships similar to those observed in empirical food webs. It is also shown that the apparent positive relation between complexity and persistence in food webs generated under the cascade model, which has been pointed out in previous papers, disappears when the final connectance is used instead of the initial one to explain species persistence.
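The abstract describes the model only verbally. A minimal sketch, assuming a Kondoh-style adaptive-foraging formulation (one common way to write adaptive predation; the paper's exact equations may differ), in which adaptive predators shift foraging effort v_{ij} toward more profitable prey:

```latex
\begin{aligned}
\frac{dB_i}{dt} &= B_i\Bigl(r_i + \sum_{j \in \text{prey}(i)} e\, v_{ij}\, a_{ij} B_j
  \;-\; \sum_{k \in \text{pred}(i)} v_{ki}\, a_{ki} B_k\Bigr),\\
\frac{dv_{ij}}{dt} &= G\, v_{ij}\Bigl(e\, a_{ij} B_j
  - \sum_{l \in \text{prey}(i)} v_{il}\, e\, a_{il} B_l\Bigr),
\end{aligned}
```

where B_i is biomass, a_{ij} are attack rates, e a conversion efficiency, and G the adaptation rate; setting G = 0 for a fraction of consumers recovers non-adaptive predators, and persistence is read off as the fraction of species whose biomass stays above a threshold after long integration.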
KnowledgePuzzle: A Browsing Tool to Adapt the Web Navigation Process to the Learner's Mental Model
ERIC Educational Resources Information Center
AlAgha, Iyad
2012-01-01
This article presents KnowledgePuzzle, a browsing tool for knowledge construction from the web. It aims to adapt the structure of web content to the learner's information needs regardless of how the web content is originally delivered. Learners are provided with a meta-cognitive space (e.g., a concept mapping tool) that enables them to plan…
The Effect of Individual Differences on Searching the Web.
ERIC Educational Resources Information Center
Ihadjadene, Madjid; Chaudiron, Stephanne,; Martins, Daniel
2003-01-01
Reports results from a project that investigated the influence of two types of expertise--knowledge of the search domain and experience of the Web search engines--on students' use of a Web search engine. Results showed participants with good knowledge in the domain and participants with high experience of the Web had the best performances. (AEF)
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.
2016-12-01
Nobody is immune from extreme events or natural hazards that can lead to large-scale consequences for the nation and public. One of the solutions to reduce the impacts of extreme events is to invest in improving resilience, with the ability to better prepare, plan, recover, and adapt to disasters. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual access risk and vulnerability information to make their communities more resilient. This abstract presents our project on developing a resilience framework for flooding to improve societal preparedness, with the following objectives: (a) develop a generalized ontology for extreme events with a primary focus on flooding; (b) develop a knowledge engine with voice recognition, artificial intelligence, natural language processing, and an inference engine. The knowledge engine will utilize the flood ontology and concepts to connect user input to relevant knowledge discovery outputs on flooding; (c) develop a data acquisition and processing framework drawing on existing environmental observations, forecast models, and social networks. The system will utilize the framework, capabilities and user base of the Iowa Flood Information System (IFIS) to populate and test the system; (d) develop a communication framework to support user interaction and delivery of information to users. The interaction and delivery channels will include voice and text input via a web-based system (e.g. IFIS), agent-based bots (e.g. Microsoft Skype, Facebook Messenger), smartphone and augmented reality applications (e.g. a smart assistant), and automated web workflows (e.g. IFTTT, CloudWork) to open up knowledge discovery for flooding to thousands of community-extensible web workflows.
HydroViz: evaluation of a web-based tool for improving hydrology education
NASA Astrophysics Data System (ADS)
Habib, E.; Ma, Y.; Williams, D.; Sharif, H.; Hossain, F.
2012-02-01
HydroViz is a web-based, student-centered, highly visual educational tool designed to support active learning in the field of Engineering Hydrology. The development of HydroViz is informed by recent advances in hydrologic data, numerical simulations, visualization and web-based technologies. An evaluation study was conducted to determine the effectiveness of HydroViz, to examine the buy-in of the program, and to identify project components that need to be improved. A total of 182 students from seven freshman and junior-/senior-level undergraduate classes in three universities participated in the study over the course of two semesters (spring 2010 and fall 2010). Data sources included homework assignments, online surveys, and informal interviews with students. Descriptive statistics were calculated for homework and the survey. Qualitative analysis of students' comments and informal interview notes was also conducted to identify ideas and patterns. HydroViz was effective in facilitating students' learning and understanding of hydrologic concepts and increasing related skills. Students had positive perceptions of various features of HydroViz and believed that HydroViz fits well in the curriculum. The experience with HydroViz was somewhat effective in raising freshman civil engineering students' interest in hydrology. In general, HydroViz tended to be more effective with students in junior- or senior-level classes than with students in freshman classes. There did not seem to be obvious differences between universities. Students identified some issues that can be addressed to improve HydroViz. Future adaptation and expansion studies are being planned to scale up the application and utility of HydroViz into various hydrology and water-resource engineering curriculum settings.
An Adaptive Web-Based Support to e-Education in Robotics and Automation
NASA Astrophysics Data System (ADS)
di Giamberardino, Paolo; Temperini, Marco
The paper presents the hardware and software architecture of a remote laboratory, with robotics and automation applications, devised to support e-teaching and e-learning activities at an undergraduate level in computer engineering. The hardware is composed of modular structures based on the Lego Mindstorms components: they are reasonably sophisticated in terms of functions, easy to use, and sufficiently affordable in terms of cost. Moreover, since the robots are intrinsically modular with respect to the number and distribution of sensors and actuators, they are easily and quickly reconfigurable. A web application makes the laboratory and its robots available via the internet. The software framework allows the teacher to define, for the course under her/his responsibility, a learning path made of different and differently complex exercises, graduated in terms of the "difficulty" they pose and the "competence" that the solver is expected to have shown. The learning path of exercises is adapted to the individual learner's progressively growing competence: at any moment, only a subset of the exercises is available (depending on how close their levels of competence and difficulty are to those of the exercises already solved by the learner).
The Web: Can We Make It Easier To Find Information?
ERIC Educational Resources Information Center
Maddux, Cleborne D.
1999-01-01
Reviews problems with the World Wide Web that can be attributed to human error or ineptitude, and provides suggestions for improvement. Discusses poor Web design, poor use of search engines, and poor quality control by search engines and directories. (AEF)
Adaptive User Model for Web-Based Learning Environment.
ERIC Educational Resources Information Center
Garofalakis, John; Sirmakessis, Spiros; Sakkopoulos, Evangelos; Tsakalidis, Athanasios
This paper describes the design of an adaptive user model and its implementation in an advanced Web-based Virtual University environment that encompasses combined and synchronized adaptation between educational material and well-known communication facilities. The Virtual University environment has been implemented to support a postgraduate…
How to Boost Engineering Support Via Web 2.0 - Seeds for the Ares Project...and/or Yours?
NASA Technical Reports Server (NTRS)
Scott, David W.
2010-01-01
The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for Engineering Support capability for NASA's Ares launch system development. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal intended to provide a seamless interface to support and simplify two critical activities: a) Access and analyze Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison. b) Provide archive storage for engineering instrumentation data to support engineering design, development, and test. A mix of NASA-written and COTS software provides engineering analysis tools. A by-product of using a data portal to access and display data is access to collaborative tools inherent in a Web 2.0 environment. This paper discusses how Web 2.0 techniques, particularly social media, might be applied to the traditionally conservative and formal engineering support arena. A related paper by the author [1] considers use
ERIC Educational Resources Information Center
Lakonpol, Thongmee; Ruangsuwan, Chaiyot; Terdtoon, Pradit
2015-01-01
This research aimed to develop a web-based learning environment model for enhancing cognitive skills of undergraduate students in the field of electrical engineering. The research is divided into 4 phases: 1) investigating the current status and requirements of web-based learning environment models. 2) developing a web-based learning environment…
NASA Astrophysics Data System (ADS)
Valentine, Andrew; Belski, Iouri; Hamilton, Margaret
2017-11-01
Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed about their study habits and reported they use electronic-based materials more than paper-based materials while studying, suggesting students may engage with web-based tools. Students then generated solutions to a problem task using either a paper-based template or an equivalent web interface. Students who used the web-based approach performed as well as students who used the paper-based approach, suggesting the technique can be successfully adopted and taught online. Web-based tools may therefore be adopted as supplementary material in a range of engineering courses as a way to increase students' options for enhancing problem-solving skills.
A user-oriented web crawler for selectively acquiring online content in e-health research
Xu, Songhua; Yoon, Hong-Jun; Tourassi, Georgia
2014-01-01
Motivation: Life stories of diseased and healthy individuals are abundantly available on the Internet. Collecting and mining such online content can offer many valuable insights into patients’ physical and emotional states throughout the pre-diagnosis, diagnosis, treatment and post-treatment stages of the disease compared with those of healthy subjects. However, such content is widely dispersed across the web. Using traditional query-based search engines to manually collect relevant materials is rather labor intensive and often incomplete due to resource constraints in terms of human query composition and result parsing efforts. The alternative option, blindly crawling the whole web, has proven inefficient and unaffordable for e-health researchers. Results: We propose a user-oriented web crawler that adaptively acquires user-desired content on the Internet to meet the specific online data source acquisition needs of e-health researchers. Experimental results on two cancer-related case studies show that the new crawler can substantially accelerate the acquisition of highly relevant online content compared with the existing state-of-the-art adaptive web crawling technology. For the breast cancer case study using the full training set, the new method achieves a cumulative precision between 74.7 and 79.4% after 5 h of execution till the end of the 20-h long crawling session as compared with the cumulative precision between 32.8 and 37.0% using the peer method for the same time period. For the lung cancer case study using the full training set, the new method achieves a cumulative precision between 56.7 and 61.2% after 5 h of execution till the end of the 20-h long crawling session as compared with the cumulative precision between 29.3 and 32.4% using the peer method. Using the reduced training set in the breast cancer case study, the cumulative precision of our method is between 44.6 and 54.9%, whereas the cumulative precision of the peer method is between 24.3 and 26.3%; for the lung cancer case study using the reduced training set, the cumulative precisions of our method and the peer method are, respectively, between 35.7 and 46.7% versus between 24.1 and 29.6%. These numbers clearly show a consistently superior accuracy of our method in discovering and acquiring user-desired online content for e-health research. Availability and implementation: The implementation of our user-oriented web crawler is freely available to non-commercial users via the following Web site: http://bsec.ornl.gov/AdaptiveCrawler.shtml. The Web site provides a step-by-step guide on how to execute the web crawler implementation. In addition, the Web site provides the two study datasets including manually labeled ground truth, initial seeds and the crawling results reported in this article. Contact: xus1@ornl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24078710
A Web Based Approach to Integrate Space Culture and Education
NASA Astrophysics Data System (ADS)
Gerla, F.
2002-01-01
Our intention is to dedicate a large section of our web site to space education. As the national User Support and Operation Center (USOC) for the International Space Station, MARS Center is also willing to provide material, such as videos and data, for educational purposes. In order to base our initiative on authoritative precedents, our first step has been a comparative analysis of different space agency education web sites, such as those of ESA and NASA. As is well known, the Internet is a powerful reality, capable of connecting people all over the world and making public a huge amount of information. The first problem, then, is to organize this information in order to use the web as an efficient education tool. That is why fields such as User Modeling (UM), Human-Computer Interaction (HCI) and the Semantic Web have become more important in Information Technology and Science. Traditional search engines are unable to provide optimal retrieval of the content users are really searching for. The Semantic Web is a valid alternative: according to its principles, web information should be represented using metadata languages. Users should be able and enabled to successfully search, obtain and study new information from the web. Forging knowledge in an intelligent manner, preventing users from making errors, and making this formidable quantity of information easily available have also been the starting points for HCI methodologies for defining Adaptable Interfaces. Here the information is divided into different sets, on the basis of the intended user profile, in order to prevent users from getting lost. Realized as an adaptable interface, an education web site can help users to effectively retrieve the information necessary for their purposes (teaching for a teacher and learning for a student). For students it is a great advantage to use interfaces designed on the basis of their age and scholastic level. Indeed, an adaptable interface is intended not just for students, but also for teachers, who can use it to prepare their lessons, retrieve information and organize the didactic material to support their teaching. We think it important to use a user-centered "psychology" based on UM: we have to know the needs and expectations of the students. Our intent is to use usability tests not just to assess the site's effectiveness and clarity, but also to investigate the aesthetic preferences of children and young people. Physics, mathematics and chemistry are just some of the demanding learning fields connected with space technologies. Space culture is a potentially never-ending field, and our aim will be to guide students through this universe of knowledge. This paper will present MARS activities in the framework of the above methodologies aimed at implementing a web-based approach to integrate space culture and education. The activities are already in progress and some results will be presented in the final paper.
The Effectiveness of Web Search Engines to Index New Sites from Different Countries
ERIC Educational Resources Information Center
Pirkola, Ari
2009-01-01
Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.
2017-05-01
While virtual copies of the real world tend to be created faster than ever through point clouds and derivatives, their effective use by all professionals demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
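The extrapolation idea can be made concrete with a small sketch. Assuming made-up document frequencies from a background corpus and made-up reported hit counts (the study's actual pilot-word set and corpus are not reproduced here):

```python
import statistics

# Fraction of background-corpus documents containing each pilot word
# (measured once on a large static corpus of Web pages) -- made up here.
corpus_doc_fraction = {"recipe": 0.012, "weather": 0.021, "guitar": 0.006}

# Hit counts the search engine reports for the same single-word queries
# on a given day -- also made up for illustration.
reported_hits = {"recipe": 5.6e8, "weather": 9.8e8, "guitar": 2.7e8}

def estimate_index_size(fractions, hits):
    """Extrapolate an index-size estimate from each pilot word, then take
    the median to damp outliers caused by rounded hit counts."""
    estimates = [hits[w] / fractions[w] for w in fractions if w in hits]
    return statistics.median(estimates)

size = estimate_index_size(corpus_doc_fraction, reported_hits)
print(f"Estimated index size: {size:.3g} documents")
```

Repeating such an estimate daily for each engine yields the kind of longitudinal index-size series whose variability the study analyses.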
Network Analysis of Reconnaissance and Intrusion of an Industrial Control System
2016-09-01
simulated a plant engineer using the engineering workstation web browser to authenticate to the vegetable cooker HMI. While the engineer established the...observed the vegetable cooker HMI web display, the attacker stopped capturing network traffic. Acting as the attacker, we searched the attacker’s pcap...manually controlled by human activity. In this testbed network, only web browser traffic (HTTP) is created by an operator to view an HMI status
Google matrix analysis of directed networks
NASA Astrophysics Data System (ADS)
Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.
2015-10-01
In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing has become a formidable task for the society. Because of the rapid growth of the World Wide Web, and social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
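As a concrete illustration of the construction the review is built on, here is a minimal sketch of the Google matrix G = alpha*S + (1-alpha)*E/N and its PageRank vector for a tiny made-up directed network, assuming the standard damping factor alpha = 0.85:

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Build G = alpha * S + (1 - alpha) * E / N from a directed adjacency
    matrix (adj[i, j] = 1 for a link j -> i), handling dangling nodes."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    S = np.where(col_sums > 0,
                 adj / np.where(col_sums == 0, 1, col_sums),
                 1.0 / n)
    return alpha * S + (1 - alpha) / n * np.ones((n, n))

def pagerank(G, tol=1e-10):
    """Leading eigenvector of G by power iteration (the PageRank vector)."""
    n = G.shape[0]
    p = np.full(n, 1.0 / n)
    while True:
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Tiny example network with links 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(pagerank(google_matrix(A)))
```

Real networks are handled with sparse matrices, but the construction is the same.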
Using AI and Semantic Web Technologies to attack Process Complexity in Open Systems
NASA Astrophysics Data System (ADS)
Thompson, Simon; Giles, Nick; Li, Yang; Gharib, Hamid; Nguyen, Thuc Duong
Recently many vendors and groups have advocated using BPEL and WS-BPEL as a workflow language to encapsulate business logic. While encapsulating workflow and process logic in one place is a sensible architectural decision, the implementation of complex workflows suffers from the same problems that made managing and maintaining hierarchical procedural programs difficult. BPEL lacks constructs for logical modularity, such as the requirements construct from the STL [12] or the ability to adapt constructs like pure abstract classes for the same purpose. We describe a system that uses semantic web and agent concepts to implement an abstraction layer for BPEL based on the notion of Goals and service typing. AI planning was used to enable process engineers to create and validate systems that used services and goals as first-class concepts and compiled processes at run time for execution.
Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G
2016-01-01
Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine had a lower overall quality score than those with a noncritical bias (p < .01) and covered fewer important HPV-related topics (p < .001). Critical Web pages required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They also were more likely to raise unsubstantiated concerns about vaccination. Web pages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Behavior of an adaptive bio-inspired spider web
NASA Astrophysics Data System (ADS)
Zheng, Lingyue; Behrooz, Majid; Huie, Andrew; Hartman, Alex; Gordaninejad, Faramarz
2015-03-01
The goal of this study is to demonstrate the feasibility of an artificial adaptive spider web with behavior comparable to a real spider web. First, the natural frequency and energy absorption ability of a passive web are studied. Next, a control system consisting of stepper motors, load cells and an Arduino is constructed to mimic a spider's ability to control the tension of the radial strings in the web. The energy-related characteristics of the artificial spider web are examined while the pre-tension of the radial strings is varied. Various mechanical properties of a damaged spider web are adjusted to study their effect on the behavior of the web. It is demonstrated that the pre-tension and stiffness of the web's radial strings can significantly affect the natural frequency and the total energy of the full and damaged webs.
Agility: Agent - Ility Architecture
2002-10-01
existing and emerging standards (e.g., distributed objects, email, web, search engines, XML, Java, Jini). Three agent system components resulted from...agents and other Internet resources and operate over the web (AgentGram), a yellow pages service that uses Internet search engines to locate XML ads for agents and other Internet resources (WebTrader).
Engineering Laboratory Instruction in Virtual Environment--"eLIVE"
ERIC Educational Resources Information Center
Chaturvedi, Sushil; Prabhakaran, Ramamurthy; Yoon, Jaewan; Abdel-Salam, Tarek
2011-01-01
A novel application of web-based virtual laboratories to prepare students for physical experiments is explored in some detail. The pedagogy of supplementing physical laboratory with web-based virtual laboratories is implemented by developing a web-based tool, designated in this work as "eLIVE", an acronym for Engineering Laboratory…
2015-01-01
Methods, Assumptions, and Procedures: A search of the internet looking at web sites specializing in graphics, graphics engines, web browser applications, and games was conducted to
Incorporating the Internet into Traditional Library Instruction.
ERIC Educational Resources Information Center
Fonseca, Tony; King, Monica
2000-01-01
Presents a template for teaching traditional library research and one for incorporating the Web. Highlights include the differences between directories and search engines; devising search strategies; creating search terms; how to choose search engines; evaluating online resources; helpful Web sites; and how to read URLs to evaluate a Web site's…
Children's Search Engines from an Information Search Process Perspective.
ERIC Educational Resources Information Center
Broch, Elana
2000-01-01
Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…
Searchers Net Treasure in Monterey.
ERIC Educational Resources Information Center
McDermott, Irene E.
1999-01-01
Reports on Web keyword searching, metadata, Dublin Core, Extensible Markup Language (XML), metasearch engines (metasearch engines search several Web indexes and/or directories and/or Usenet and/or specific Web sites), and the Year 2000 (Y2K) dilemma, all topics discussed at the second annual Internet Librarian Conference sponsored by Information…
Searching for American Indian Resources on the Internet.
ERIC Educational Resources Information Center
Pollack, Ira; Derby, Amy
This paper provides basic information on searching the Internet and lists World Wide Web sites containing resources for American Indian education. Comprehensive and topical Web directories, search engines, and meta-search engines are briefly described. Search strategies are discussed, and seven Web sites are listed that provide more advanced…
Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza
2016-01-01
Computerizing paper-based clinical practice guidelines (CPG) and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies, especially web ontology language (OWL) ontologies, have been profusely used to represent computerized CPG. Using semantic web reasoning capabilities to execute OWL-based computerized CPG unties them from a specific custom-built CPG execution engine and increases their shareability, as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from a limited ability to execute CPG with high levels of expressivity, and from the high cognitive load of computerizing paper-based CPG and updating their computerized versions. In order to address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL and OWL 2 DL + semantic web rule language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs; for executing highly complex CPG, the OWL 2 DL and OWL 2 DL + SWRL engines offer additional execution capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPG. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and the validity of the generated recommendations in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.
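The engines themselves execute OWL/SWRL ontologies with a description logic reasoner. As a much-simplified sketch of the triple-store side of that idea (plain SPARQL over rdflib, with an invented mini-vocabulary, and without the OWL 2 DL reasoning that a dedicated reasoner would supply in practice):

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical mini-ontology of one guideline step; the paper's actual CPG
# ontologies (OWL 1/2 DL + SWRL) are far richer than this sketch.
CPG = Namespace("http://example.org/cpg#")

g = Graph()
g.add((CPG.Step1, RDF.type, CPG.DecisionStep))
g.add((CPG.Step1, CPG.hasCondition, Literal("HbA1c > 7.0")))
g.add((CPG.Step1, CPG.recommends, CPG.StartMetformin))
g.add((CPG.StartMetformin, RDF.type, CPG.Recommendation))

# A triple-store query standing in for one cycle of an execution engine:
# find the recommendation attached to a decision step and its condition.
query = """
PREFIX cpg: <http://example.org/cpg#>
SELECT ?cond ?rec WHERE {
    ?step a cpg:DecisionStep ;
          cpg:hasCondition ?cond ;
          cpg:recommends ?rec .
}
"""
for row in g.query(query):
    print("If", row.cond, "then recommend:", row.rec)
```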
Integrated gas turbine engine-nacelle
NASA Technical Reports Server (NTRS)
Adamson, A. P.; Sargisson, D. F.; Stotler, C. L., Jr. (Inventor)
1979-01-01
A nacelle for use with a gas turbine engine is provided with an integral webbed structure resembling a spoked wheel for rigidly interconnecting the nacelle and engine. The nacelle is entirely supported in its spatial relationship with the engine by means of the webbed structure. The inner surface of the nacelle defines the outer limits of the engine motive fluid flow annulus, while the outer surface of the nacelle defines a streamlined envelope for the engine.
Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.
ERIC Educational Resources Information Center
Eagan, Ann; Bender, Laura
Searching on the world wide web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…
Workflow and web application for annotating NCBI BioProject transcriptome data
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A.; Barrero, Luz S.; Landsman, David
2017-01-01
Abstract The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. Database URL: http://www.ncbi.nlm.nih.gov/projects/physalis/ PMID:28605765
Sexual information seeking on web search engines.
Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles
2004-02-01
Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat rooms discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.
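As an illustration of the kind of log analysis involved (the actual Alta Vista/AlltheWeb transaction-log format and term lists are not reproduced here; the CSV layout and term set below are assumptions):

```python
import csv
from collections import defaultdict
from datetime import datetime

SEX_TERMS = {"sex", "nude", "porn"}   # illustrative term list only

def analyze(log_path):
    """Group a query log by anonymous user ID, flag sessions containing
    sexually related terms, and report mean session duration per group.
    Assumes a CSV with columns: user_id, timestamp (ISO 8601), query."""
    sessions = defaultdict(list)
    with open(log_path, newline="") as f:
        for user_id, ts, query in csv.reader(f):
            sessions[user_id].append((datetime.fromisoformat(ts), query))
    durations = {"sexual": [], "other": []}
    for entries in sessions.values():
        entries.sort()
        span = (entries[-1][0] - entries[0][0]).total_seconds()
        words = {w for _, q in entries for w in q.lower().split()}
        key = "sexual" if words & SEX_TERMS else "other"
        durations[key].append(span)
    return {k: (sum(v) / len(v) if v else 0.0) for k, v in durations.items()}

print(analyze("queries.csv"))
```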
OLIVER: an online library of images for veterinary education and research.
McGreevy, Paul; Shaw, Tim; Burn, Daniel; Miller, Nick
2007-01-01
As part of a strategic move by the University of Sydney toward increased flexibility in learning, the Faculty of Veterinary Science undertook a number of developments involving Web-based teaching and assessment. OLIVER underpins them by providing a rich, durable repository for learning objects. To integrate Web-based learning, case studies, and didactic presentations for veterinary and animal science students, we established an online library of images and other learning objects for use by academics in the Faculties of Veterinary Science and Agriculture. The objectives of OLIVER were to maximize the use of the faculty's teaching resources by providing a stable archiving facility for graphic images and other multimedia learning objects that allows flexible and precise searching, integrating indexing standards, thesauri, pull-down lists of preferred terms, and linking of objects within cases. OLIVER offers a portable and expandable Web-based shell that facilitates ongoing storage of learning objects in a range of media. Learning objects can be downloaded in common, standardized formats so that they can be easily imported for use in a range of applications, including Microsoft PowerPoint, WebCT, and Microsoft Word. OLIVER now contains more than 9,000 images relating to many facets of veterinary science; these are annotated and supported by search engines that allow rapid access to both images and relevant information. The Web site is easily updated and adapted as required.
Publicizing Your Web Resources for Maximum Exposure.
ERIC Educational Resources Information Center
Smith, Kerry J.
2001-01-01
Offers advice to librarians for marketing their Web sites on Internet search engines. Advises against relying solely on spiders and recommends adding metadata to the source code and delivering that information directly to the search engines. Gives an overview of metadata and typical coding for meta tags. Includes Web addresses for a number of…
Staleness Among Web Search Engines.
ERIC Educational Resources Information Center
Koehler, Wallace
1998-01-01
Describes a study of four major Web search engines that tested for staleness, a condition in which a significant number of the hits an engine returns point to Web pages or server-level domains (SLDs) that are no longer viable. Results of tests of URLs with AltaVista, HotBot, InfoSeek, and Open Text are discussed. (Author/LRW)
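The 1998 study's exact viability tests are not described in this summary; a present-day sketch of one simple staleness proxy (the share of result URLs that no longer resolve) might look like:

```python
import requests

def staleness(urls, timeout=10):
    """Fraction of result URLs that no longer resolve to a viable page
    (connection failures or 4xx/5xx responses), one simple staleness proxy."""
    dead = 0
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            if resp.status_code >= 400:
                dead += 1
        except requests.RequestException:
            dead += 1
    return dead / len(urls) if urls else 0.0

# Usage: feed in the URLs returned by a search engine for a test query.
print(staleness(["https://example.com/", "https://example.org/gone-page"]))
```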
Discovering How Students Search a Library Web Site: A Usability Case Study.
ERIC Educational Resources Information Center
Augustine, Susan; Greene, Courtney
2002-01-01
Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…
ERIC Educational Resources Information Center
Turner, Laura
2001-01-01
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
F-OWL: An Inference Engine for Semantic Web
NASA Technical Reports Server (NTRS)
Zou, Youyong; Finin, Tim; Chen, Harry
2004-01-01
Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description logic based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.
ERIC Educational Resources Information Center
Lo, Jia-Jiunn; Chan, Ya-Chen; Yeh, Shiou-Wen
2012-01-01
This study developed an adaptive web-based learning system focusing on students' cognitive styles. The system is composed of a student model and an adaptation model. It collected students' browsing behaviors to update the student model for unobtrusively identifying student cognitive styles through a multi-layer feed-forward neural network (MLFF).…
A Web-Based Adaptive Tutor to Teach PCR Primer Design
ERIC Educational Resources Information Center
van Seters, Janneke R.; Wellink, Joan; Tramper, Johannes; Goedhart, Martin J.; Ossevoort, Miriam A.
2012-01-01
When students have varying prior knowledge, personalized instruction is desirable. One way to personalize instruction is by using adaptive e-learning to offer training of varying complexity. In this study, we developed a web-based adaptive tutor to teach PCR primer design: the PCR Tutor. We used part of the Taxonomy of Educational Objectives (the…
ERIC Educational Resources Information Center
Hock, Randolph
This book aims to facilitate more effective and efficient use of World Wide Web search engines by helping the reader: know the basic structure of the major search engines; become acquainted with those attributes (features, benefits, options, content, etc.) that search engines have in common and where they differ; know the main strengths and…
47 CFR 5.55 - Filing of applications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... instead be submitted to the Commission's Office of Engineering and Technology, Washington, DC 20554...
47 CFR 5.55 - Filing of applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... instead be submitted to the Commission's Office of Engineering and Technology, Washington, DC 20554...
Search Engines on the World Wide Web.
ERIC Educational Resources Information Center
Walster, Dian
1997-01-01
Discusses search engines and provides methods for determining what resources are searched, the quality of the information, and the algorithms used that will improve the use of search engines on the World Wide Web, online public access catalogs, and electronic encyclopedias. Lists strategies for conducting searches and for learning about the latest…
Interactive Information Organization: Techniques and Evaluation
2001-05-01
information search and access. Locating interesting information on the World Wide Web is the main task of on-line search engines. Such engines accept a...likelihood of being relevant to the user’s request. The majority of today’s Web search engines follow this scenario. The ordering of documents in the
47 CFR 5.55 - Filing of applications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... Office of Engineering and Technology Web site https://gullfoss2.fcc.gov/prod/oet/cf/els/index.cfm... instead be submitted to the Commission's Office of Engineering and Technology, Washington, DC 20554...
Quality analysis of patient information about knee arthroscopy on the World Wide Web.
Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan
2007-05-01
This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding the repetitions, there were 117 Web sites available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to provide reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines, rather than a single one, increases patients' chances of obtaining relevant information. Our study shows the difficulties patients encounter in obtaining information regarding knee arthroscopy and highlights the role of knee surgeons in helping their patients identify relevant, authentic information on the World Wide Web in the most efficient manner.
GoWeb: a semantic search engine for the life science web.
Dietze, Heiko; Schroeder, Michael
2009-10-01
Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large results sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves 77% success rate improving an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.
Web sites for postpartum depression: convenient, frustrating, incomplete, and misleading.
Summers, Audra L; Logsdon, M Cynthia
2005-01-01
To evaluate the content and the technology of Web sites providing information on postpartum depression. Eleven search engines were queried using the words "Postpartum Depression." The top 10 sites in each search engine were evaluated for correct content and technology using the Web Depression Tool, based on the Technology Assessment Model. Of the 36 unique Web sites located, 34 were available to review. Only five Web sites provided >75% correct responses to questions that summarized the current state of the science for postpartum depression. Eleven of the Web sites contained little or no useful information about postpartum depression, despite being among the first 10 Web sites listed by the search engine. Some Web sites contained possibly harmful suggestions for treatment of postpartum depression. In addition, there are many problems with the technology of Web sites providing information on postpartum depression. A better Web site for postpartum depression is necessary if we are to meet the needs of consumers for accurate and current information using technology that enhances learning. Since patient education is a core competency for nurses, it is essential that nurses understand how their patients are using the World Wide Web for learning and how we can assist our patients to find appropriate sites containing correct information.
A Web-Remote/Robotic/Scheduled Astronomical Data Acquisition System
NASA Astrophysics Data System (ADS)
Denny, Robert
2011-03-01
Traditionally, remote/robotic observatory operating systems have been custom made for each observatory. While data reduction pipelines need to be tailored for each investigation, the data acquisition process (especially for stare-mode optical images) is often quite similar across investigations. Since 1999, DC-3 Dreams has focused on providing and supporting a remote/robotic observatory operating system which can be adapted to a wide variety of physical hardware and optics while achieving the highest practical observing efficiency and safe/secure web browser user controls. ACP Expert consists of three main subsystems: (1) a robotic list-driven data acquisition engine which controls all aspects of the observatory, (2) a constraint-driven dispatch scheduler with a long-term database of requests, and (3) a built-in "zero admin" web server and dynamic web pages which provide a remote capability for immediate execution and monitoring as well as entry and monitoring of dispatch-scheduled observing requests. No remote desktop login is necessary for observing, thus keeping the system safe and consistent. All routine operation is via the web browser. A wide variety of telescope mounts, CCD imagers, guiding sensors, filter selectors, focusers, instrument-package rotators, weather sensors, and dome control systems are supported via the ASCOM standardized device driver architecture. The system is most commonly employed on commercial 1-meter and smaller observatories used by universities and advanced amateurs for both science and art. One current project, the AAVSO Photometric All-Sky Survey (APASS), uses ACP Expert to acquire large volumes of data in dispatch-scheduled mode. In its first 18 months of operation (North then South), 40,307 sky images were acquired in 117 photometric nights, resulting in 12,107,135 stars detected two or more times. These stars had measures in 5 filters. The northern station covered 754 fields (6446 square degrees) at least twice, the southern station covered 951 fields (8500 square degrees) at least twice. The database of photometric calibrations is available from AAVSO. The paper will cover the ACP web interface, including the use of AJAX and JSON within a micro-content framework, as well as dispatch scheduler and acquisition engine operation.
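As an aside, the web-monitoring side of such a system typically reduces to polling a small JSON status document from the built-in web server; the Python sketch below shows that pattern with a hypothetical endpoint and field names (the actual ACP Expert URL scheme is not documented here).

import json
import time
import urllib.request

# Hypothetical endpoint; not ACP Expert's documented URL scheme.
STATUS_URL = "http://observatory.example.org/acp/status.json"

def poll_status(url: str, interval_s: float = 30.0, cycles: int = 3) -> None:
    """Periodically fetch a small JSON status document and print a summary."""
    for _ in range(cycles):
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = json.load(resp)
        # Field names below are illustrative only.
        print(status.get("activity"), status.get("target"), status.get("filter"))
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_status(STATUS_URL)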
ERIC Educational Resources Information Center
Griffin, Teresa; Cohen, Deb
2012-01-01
The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…
E-Referencer: Transforming Boolean OPACs to Web Search Engines.
ERIC Educational Resources Information Center
Khoo, Christopher S. G.; Poo, Danny C. C.; Toh, Teck-Kang; Hong, Glenn
E-Referencer is an expert intermediary system for searching library online public access catalogs (OPACs) on the World Wide Web. It is implemented as a proxy server that mediates the interaction between the user and Boolean OPACs. It transforms a Boolean OPAC into a retrieval system with many of the search capabilities of Web search engines.…
Web Search Engines: Key To Locating Information for All Users or Only the Cognoscenti?
ERIC Educational Resources Information Center
Tomaiuolo, Nicholas G.; Packer, Joan G.
This paper describes a study that attempted to ascertain the degree of success that undergraduates and graduate students, with varying levels of experience using the World Wide Web and Web search engines, and without librarian instruction or intervention, had in locating relevant material on specific topics furnished by the investigators. Because…
Promoting Teachers' Positive Attitude towards Web Use: A Study in Web Site Development
ERIC Educational Resources Information Center
Akpinar, Yavuz; Bayramoglu, Yusuf
2008-01-01
The purpose of the study was to examine effects of a compact training for developing web sites on teachers' web attitude, as composed of: web self efficacy, perceived web enjoyment, perceived web usefulness and behavioral intention to use the web. To measure the related constructs, the Web Attitude Scale was adapted into Turkish and tested with a…
Automatic document classification of biological literature
Chen, David; Müller, Hans-Michael; Sternberg, Paul W
2006-01-01
Background Document classification is a wide-spread problem with many applications, from organizing search engine snippets to spam filtering. We previously described Textpresso, a text-mining system for biological literature, which marks up full text according to a shallow ontology that includes terms of biological interest. This project investigates document classification in the context of biological literature, making use of the Textpresso markup of a corpus of Caenorhabditis elegans literature. Results We present a two-step text categorization algorithm to classify a corpus of C. elegans papers. Our classification method first uses a support vector machine-trained classifier, followed by a novel, phrase-based clustering algorithm. This clustering step autonomously creates cluster labels that are descriptive and understandable by humans. This clustering engine performed better on a standard test-set (Reuters 21578) compared to previously published results (F-value of 0.55 vs. 0.49), while producing cluster descriptions that appear more useful. A web interface allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. Conclusion We have demonstrated a simple method to classify biological documents that embodies an improvement over current methods. While the classification results are currently optimized for Caenorhabditis elegans papers by human-created rules, the classification engine can be adapted to different types of documents. We have demonstrated this by presenting a web interface that allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. PMID:16893465
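A rough Python sketch of a comparable two-step pipeline is given below, using scikit-learn; the SVM stage mirrors the paper's first step, while plain k-means stands in for the authors' phrase-based clustering, and the example documents and labels are invented.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans

docs = [
    "lin-12 signalling in vulval development",
    "synaptic transmission in motor neurons",
    "meiotic chromosome segregation mutants",
    "axon guidance and growth cone steering",
]
labels = ["development", "neurobiology", "development", "neurobiology"]

# Step 1: SVM-trained classifier over TF-IDF features.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["vulval cell fate specification"]))

# Step 2 (stand-in): cluster documents into finer-grained groups; the paper's
# phrase-based clustering is replaced here by plain k-means for brevity.
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)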
Teen smoking cessation help via the Internet: a survey of search engines.
Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I
2003-07-01
The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate on-line smoking cessation help for teens is constrained by search engine choice and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly. Smoking cessation Web sites need to be picked up by searches on multiple search engines. Further evaluation of smoking cessation Web sites needs to be conducted to identify the most effective help for teens.
Social Dimension of Web 2.0 in Engineering Education
ERIC Educational Resources Information Center
Ahrens, Andreas; Zascerinska, Jelena
2010-01-01
Contemporary engineers need to become more cognizant and more responsive to the emerging needs of the market for engineering and technology services. Social dimension of Web 2.0 which penetrates our society more thoroughly with the availability of broadband services has the potential to contribute decisively to the sustainable development of…
Social Dimension of Web 2.0 in Engineering Education: Students' View
ERIC Educational Resources Information Center
Zascerinska, Jelena; Bassus, Olaf; Ahrens, Andreas
2010-01-01
Contemporary engineers need to become more cognizant and more responsive to the emerging needs of the market for engineering and technology services. Social dimension of Web 2.0 which penetrates our society more thoroughly with the availability of broadband services has the potential to contribute decisively to the sustainable development of…
Effects of Web-Based Interactive Modules on Engineering Students' Learning Motivations
ERIC Educational Resources Information Center
Bai, Haiyan; Aman, Amjad; Xu, Yunjun; Orlovskaya, Nina; Zhou, Mingming
2016-01-01
The purpose of this study is to assess the impact of a newly developed module, the Interactive Web-Based Visualization Tools for Gluing Undergraduate Fuel Cell Systems Courses (IGLU) system, on the learning motivations of engineering students using two samples (n[subscript 1] = 144 and n[subscript 2] = 135) from senior engineering classes. The…
Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.
ERIC Educational Resources Information Center
Pathak, Praveen; Gordon, Michael
1999-01-01
Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…
NASA Technical Reports Server (NTRS)
Scott, David W.
2010-01-01
The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for Engineering Support capability for NASA's Ares rocket development and operations. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal to support and simplify two critical activities: (1) access and analyze Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison; and (2) establish and maintain collaborative communities within the Ares teams/subteams and with other projects, e.g., Space Shuttle, International Space Station (ISS). AEON seeks to provide a seamless interface to a) locally developed engineering applications and b) a Commercial-Off-The-Shelf (COTS) collaborative environment that includes Web 2.0 capabilities, e.g., blogging, wikis, and social networking. This paper discusses how Web 2.0 might be applied to the typically conservative engineering support arena, based on feedback from Integration, Verification, and Validation (IV&V) testing and on searching for their use in similar environments.
A novel architecture for information retrieval system based on semantic web
NASA Astrophysics Data System (ADS)
Zhang, Hui
2011-12-01
Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand the meaning of these documents. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which opens new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when such a retrieval system does not have enough knowledge, it returns a large number of meaningless results to users because of the sheer volume of information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
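A minimal Python sketch of such a routing decision is shown below; the coverage heuristic and the ontology terms are illustrative assumptions, not the paper's inference-engine logic.

def route_query(query: str, ontology_terms: set) -> str:
    """Decide whether a query should go to the keyword engine or the
    semantic engine, based on how well its terms are covered by the
    ontology. A crude stand-in for an inference-engine check."""
    terms = set(query.lower().split())
    coverage = len(terms & ontology_terms) / max(len(terms), 1)
    return "semantic" if coverage >= 0.5 else "keyword"

ontology_terms = {"protein", "gene", "disease", "pathway"}
print(route_query("protein disease associations", ontology_terms))  # semantic
print(route_query("cheap flight tickets", ontology_terms))          # keyword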
A Web Based Collaborative Design Environment for Spacecraft
NASA Technical Reports Server (NTRS)
Dunphy, Julia
1998-01-01
In this era of shrinking federal budgets in the USA, we need to dramatically improve the efficiency of the spacecraft engineering design process. We have developed a method that captures much of the experts' expertise in a dataflow design graph and provides: a seamlessly connectable set of local and remote design tools; seamlessly connectable web-based design tools; and a web browser interface to the developing spacecraft design. We have recently completed our first web browser interface and demonstrated its utility in the design of an aeroshell using design tools located at web sites at three NASA facilities. Multiple design engineers and managers are now able to interrogate the design engine simultaneously and find out what the design looks like at any point in the design cycle, what its parameters are, and how it reacts to adverse space environments.
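The dataflow design graph amounts to running design tools in dependency order and feeding each tool the results of its upstream tools; the Python sketch below illustrates that idea with graphlib's topological sort. The tool names, formulas, and numbers are invented for the example.

from graphlib import TopologicalSorter

# Hypothetical design tools expressed as functions of upstream results.
tools = {
    "aeroshell_geometry": (lambda res: {"diameter_m": 2.65}, []),
    "thermal_analysis":   (lambda res: {"peak_heat_kw_m2": 55.0 * res["aeroshell_geometry"]["diameter_m"]},
                           ["aeroshell_geometry"]),
    "mass_budget":        (lambda res: {"tps_mass_kg": 0.8 * res["thermal_analysis"]["peak_heat_kw_m2"]},
                           ["thermal_analysis"]),
}

graph = {name: deps for name, (_, deps) in tools.items()}
results = {}
for name in TopologicalSorter(graph).static_order():   # run tools in dependency order
    func, _ = tools[name]
    results[name] = func(results)
print(results["mass_budget"])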
Flood AI: An Intelligent Systems for Discovery and Communication of Disaster Knowledge
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.
2017-12-01
Communities are not immune from extreme events or natural disasters that can lead to large-scale consequences for the nation and the public. Improving resilience to better prepare for, plan for, recover from, and adapt to disasters is critical to reducing the impacts of extreme events. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the data, information gaps, and knowledge challenges that need to be addressed, and suggests that every individual should have access to risk and vulnerability information to make their communities more resilient. This project presents an intelligent system, Flood AI, that improves societal preparedness for flooding by providing a knowledge engine built on voice recognition, artificial intelligence, and natural language processing, based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and its concepts to connect user input to relevant knowledge discovery channels on flooding through a data acquisition and processing framework utilizing environmental observations, forecast models, and knowledge bases. Communication channels of the framework include web-based systems, agent-based chat bots, smartphone applications, automated web workflows, and smart home devices, opening up knowledge discovery for flooding to many unique use cases.
Can Interactive Web-Based CAD Tools Improve the Learning of Engineering Drawing? A Case Study
ERIC Educational Resources Information Center
Pando Cerra, Pablo; Suárez González, Jesús M.; Busto Parra, Bernardo; Rodríguez Ortiz, Diana; Álvarez Peñín, Pedro I.
2014-01-01
Many current Web-based learning environments facilitate the theoretical teaching of a subject but this may not be sufficient for those disciplines that require a significant use of graphic mechanisms to resolve problems. This research study looks at the use of an environment that can help students learn engineering drawing with Web-based CAD…
Information Retrieval for Education: Making Search Engines Language Aware
ERIC Educational Resources Information Center
Ott, Niels; Meurers, Detmar
2010-01-01
Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…
Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.
ERIC Educational Resources Information Center
Mardis, Marcia
Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden," "invisible," or…
Adding a Visualization Feature to Web Search Engines: It’s Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.
Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.
An open-source, mobile-friendly search engine for public medical knowledge.
Samwald, Matthias; Hanbury, Allan
2014-01-01
The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables the users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
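A minimal Python sketch of performance-based target-engine ranking with feedback updates is given below; the scoring rule and engine names are illustrative assumptions rather than MetaSEEk's actual algorithm.

class MetaSearchRanker:
    """Rank target search engines by a running performance score and update
    the scores from user relevance feedback (simplified)."""

    def __init__(self, engines, alpha=0.3):
        self.alpha = alpha
        self.score = {e: 0.5 for e in engines}  # engine -> performance estimate

    def rank(self):
        return sorted(self.score, key=self.score.get, reverse=True)

    def feedback(self, engine, relevant, retrieved):
        """Blend the observed precision for this engine into its score."""
        precision = relevant / retrieved if retrieved else 0.0
        self.score[engine] = (1 - self.alpha) * self.score[engine] + self.alpha * precision


ranker = MetaSearchRanker(["engine_a", "engine_b", "engine_c"])
ranker.feedback("engine_b", relevant=8, retrieved=10)
ranker.feedback("engine_a", relevant=2, retrieved=10)
print(ranker.rank())  # engine_b ranked first after feedback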
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
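A minimal Python sketch of the crawl-and-index loop described above follows, using only the standard library; it indexes surface pages reachable by hyperlinks and therefore illustrates exactly what Deep Web content escapes.

import collections
import re
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkAndTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.text.append(data)

def crawl(seed, max_pages=5):
    """Fetch pages breadth-first, extract links, and build a keyword index."""
    index = collections.defaultdict(set)   # keyword -> set of URLs
    queue, seen = [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue  # unreachable page or unsupported URL scheme
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in re.findall(r"[a-z]{4,}", " ".join(parser.text).lower()):
            index[word].add(url)
        queue.extend(urllib.parse.urljoin(url, h) for h in parser.links)
    return index

index = crawl("https://example.org/")
print(sorted(index)[:10])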
Workflow and web application for annotating NCBI BioProject transcriptome data.
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo
2017-01-01
The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval
ERIC Educational Resources Information Center
Schymik, Gregory
2012-01-01
Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the…
Practical Tips and Strategies for Finding Information on the Internet.
ERIC Educational Resources Information Center
Armstrong, Rhonda; Flanagan, Lynn
This paper presents the most important concepts and techniques to use in successfully searching the major World Wide Web search engines and directories, explains the basics of how search engines work, and describes what is included in their indexes. Following an introduction that gives an overview of Web directories and search engines, the first…
www.teld.net: Online Courseware Engine for Teaching by Examples and Learning by Doing.
ERIC Educational Resources Information Center
Huang, G. Q.; Shen, B.; Mak, K. L.
2001-01-01
Describes TELD (Teaching by Examples and Learning by Doing), a Web-based online courseware engine for higher education. Topics include problem-based learning; project-based learning; case methods; TELD as a Web server; course materials; TELD as a search engine; and TELD as an online virtual classroom for electronic delivery of electronic…
Lateral cascade of indirect effects in food webs with different types of adaptive behavior.
Kamran-Disfani, Ahmad R; Golubski, Antonio J
2013-12-21
It is widely recognized that indirect effects due to adaptive behaviors can have important effects on food webs. One consequence may be to change how readily perturbations propagate through the web, because species' behaviors as well as densities may respond to perturbations. It is not well understood which types of behavior are more likely to facilitate versus inhibit propagation of disturbances through a food web, or how this might be affected by the shape of a food web or the patterns of interaction strengths within it. We model two simple, laterally expanded food webs (one with three trophic levels and one with four), and compare how various adaptive behaviors affect the potential for a newly introduced predator to change the equilibrium densities of distant species. Patterns of changes in response to the introduction were qualitatively similar across most models, as were the ways in which patterns of direct interaction strengths affected those responses. Depending on both the web structure and the specific adaptive behavior, the potential for density changes to propagate through the web could be either increased or diminished relative to the no-behavior model. Two behaviors allowed density changes to propagate through a four-level web that precluded such propagation in the no-behavior model, and each of these two behaviors led to qualitatively different patterns of density changes. In the one model (diet choice) in which density changes were able to propagate in both web structures, patterns of density changes differed qualitatively between webs. Some of our results flowed from the fact that behaviors did not interact directly in the systems we considered, so that indirect effects on distant species had to be at least partly density-mediated. Our models highlight this as an inherent limitation of considering in isolation behaviors that are strictly foraging-related or strictly defense-related, making a case for the value of simultaneously considering multiple interacting types of behavior in the same model. Copyright © 2013 Elsevier Ltd. All rights reserved.
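As a worked illustration of one of the behaviors discussed (adaptive diet choice), the following Python sketch integrates a small two-prey, one-predator model in which foraging effort shifts toward the more profitable prey; the equations and parameter values are generic textbook choices, not the authors' model.

# Parameters (illustrative only).
r1, r2, K1, K2 = 1.0, 0.8, 1.0, 1.0   # prey growth rates and carrying capacities
a1, a2 = 1.2, 1.0                     # attack rates on prey 1 and prey 2
b1, b2 = 0.5, 0.5                     # conversion efficiencies
m, g = 0.3, 2.0                       # predator mortality, speed of behavioural adaptation

N1, N2, P, e = 0.5, 0.5, 0.2, 0.5     # initial densities and diet effort on prey 1
dt, steps = 0.01, 50000

for _ in range(steps):
    gain1, gain2 = b1 * a1 * N1, b2 * a2 * N2
    dN1 = r1 * N1 * (1 - N1 / K1) - a1 * e * N1 * P
    dN2 = r2 * N2 * (1 - N2 / K2) - a2 * (1 - e) * N2 * P
    dP = P * (e * gain1 + (1 - e) * gain2 - m)
    dE = g * e * (1 - e) * (gain1 - gain2)   # replicator-style effort adjustment
    N1 += dt * dN1
    N2 += dt * dN2
    P += dt * dP
    e = min(max(e + dt * dE, 0.0), 1.0)

print(round(N1, 3), round(N2, 3), round(P, 3), round(e, 3))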
NASA Astrophysics Data System (ADS)
Bower, P.; Liddicoat, J.
2009-04-01
Brownfield Action (BA - http://www.brownfieldaction.org) is a web-based, interactive, three-dimensional digital space and learning simulation in which students form geotechnical consulting companies and work collaboratively to explore and solve problems in environmental forensics. BA is being used in the United States at 10 colleges and universities in earth, environmental, or engineering sciences undergraduate and graduate courses. As a semester-long activity or done in modular form for specific topics, BA encourages active learning that requires attention to detail, intuition, and positive interaction between peers that results in Phase 1 and Phase 2 Environmental Site Assessments. Besides use in higher education courses, BA also can be adapted for instruction to local, state, and federal governmental employees, and employees in industry where brownfields need to be investigated or require remediation.
NASA Technical Reports Server (NTRS)
Powers, Sheryll Goecke (Compiler)
1995-01-01
Flight research for the F-15 HIDEC (Highly Integrated Digital Electronic Control) program was completed at NASA Dryden Flight Research Center in the fall of 1993. The flight research conducted during the last two years of the HIDEC program included two principal experiments: (1) performance seeking control (PSC), an adaptive, real-time, on-board optimization of engine, inlet, and horizontal tail position on the F-15; and (2) propulsion controlled aircraft (PCA), an augmented flight control system developed for landings as well as up-and-away flight that used only engine thrust (flight controls locked) for flight control. In September 1994, the background details and results of the PSC and PCA experiments were presented in an electronic workshop, accessible through the Dryden World Wide Web (http://www.dfrc.nasa.gov/dryden.html) and as a compact disk.
ERIC Educational Resources Information Center
Williams, Sarah C.
2010-01-01
The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…
ERIC Educational Resources Information Center
Hung, Yen-Chu
2011-01-01
This study investigates the different effects of web-based and face-to-face discussion on computer engineering majors' performance using the Karnaugh map in digital logic design. Pretest and posttest scores for two treatment groups (web-based discussion and face-to-face discussion) and a control group were compared and subjected to covariance…
Drexel at TREC 2014 Federated Web Search Track
2014-11-01
of its input RS results. 1. INTRODUCTION Federated Web Search is the task of searching multiple search engines simultaneously and combining their...or distributed properly[5]. The goal of RS is then, for a given query, to select only the most promising search engines from all those available. Most...result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
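The visibility measure itself can be summarized as the fraction of MeSH-term queries for which a catalog's domain appears among a search engine's top results; a minimal Python sketch is given below, with invented result lists (collecting the actual engine results is a separate step).

from urllib.parse import urlparse

def visibility(result_lists, catalog_domain, top_n=10):
    """Fraction of queries for which catalog_domain appears among the first
    top_n result URLs. result_lists maps each MeSH-term query to the ranked
    URLs returned by one search engine."""
    hits = 0
    for urls in result_lists.values():
        domains = {urlparse(u).netloc.lower() for u in urls[:top_n]}
        if catalog_domain in domains:
            hits += 1
    return hits / len(result_lists) if result_lists else 0.0

# Illustrative data only; domains and rankings are made up.
results = {
    "asthma": ["http://catalog.example.org/asthma", "http://example.com/a"],
    "measles": ["http://example.com/b"],
}
print(visibility(results, "catalog.example.org"))  # 0.5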
2015-04-29
in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address...collection of existing models used as benchmarks for OO-based refactoring and an existing web-based repository called REMODD to provide users with model...invariant properties. Specifically, we developed Avida-MDE (based on the Avida digital evolution platform) to support the automatic generation of software
Methodologies for Crawler Based Web Surveys.
ERIC Educational Resources Information Center
Thelwall, Mike
2002-01-01
Describes Web survey methodologies used to study the content of the Web, and discusses search engines and the concept of crawling the Web. Highlights include Web page selection methodologies; obstacles to reliable automatic indexing of Web sites; publicly indexable pages; crawling parameters; and tests for file duplication. (Contains 62…
Towards adaptation in e-learning 2.0
NASA Astrophysics Data System (ADS)
Cristea, Alexandra I.; Ghali, Fawaz
2011-04-01
This paper presents several essential steps from an overall study on shaping new ways of learning and teaching by using the synergetic merger of three different fields: Web 2.0, e-learning and adaptation (in particular, personalisation to the learner). These novel teaching and learning ways (the latter being the focus of this paper) are reflected in, and ultimately add to, various versions of the My Online Teacher 2.0 adaptive system. In particular, this paper focuses on a study of how to more effectively use and combine the recommendation of peers and content adaptation to enhance the learning outcome in e-learning systems based on Web 2.0. In order to better isolate and examine the effects of peer recommendation and adaptive content presentation, we designed experiments inspecting collaboration between individuals based on the recommendation of peers who have greater knowledge, and compared this to adaptive content recommendation, as well as to "simple" learning in a system with a minimum of Web 2.0 support. Overall, the results of adding peer recommendation and adaptive content presentation were encouraging, and are further discussed in detail in this paper.
2009-06-01
search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages regardless of domain. In addition, this thesis discusses potential evaluation and monitoring techniques to be used in conjunction
Evolution mediates the effects of apex predation on aquatic food webs
Urban, Mark C.
2013-01-01
Ecological and evolutionary mechanisms are increasingly thought to shape local community dynamics. Here, I evaluate if the local adaptation of a meso-predator to an apex predator alters local food webs. The marbled salamander (Ambystoma opacum) is an apex predator that consumes both the spotted salamander (Ambystoma maculatum) and shared zooplankton prey. Common garden experiments reveal that spotted salamander populations which co-occur with marbled salamanders forage more intensely than those that face other predator species. These foraging differences, in turn, alter the diversity, abundance and composition of zooplankton communities in common garden experiments and natural ponds. Locally adapted spotted salamanders exacerbate prey biomass declines associated with apex predation, but dampen the top-down effects of apex predation on prey diversity. Countergradient selection on foraging explains why locally adapted spotted salamanders exacerbate prey biomass declines. The two salamander species prefer different prey species, which explains why adapted spotted salamanders buffer changes in prey composition owing to apex predation. Results suggest that local adaptation can strongly mediate effects from apex predation on local food webs. Community ecologists might often need to consider the evolutionary history of populations to understand local diversity patterns, food web dynamics, resource gradients and their responses to disturbance. PMID:23720548
Adaptive Social Learning Based on Crowdsourcing
ERIC Educational Resources Information Center
Karataev, Evgeny; Zadorozhny, Vladimir
2017-01-01
Many techniques have been developed to enhance learning experience with computer technology. A particularly great influence of technology on learning came with the emergence of the web and adaptive educational hypermedia systems. While the web enables users to interact and collaborate with each other to create, organize, and share knowledge via…
Personalisation in Web-Based Learning Environments
ERIC Educational Resources Information Center
Santally, Mohammad Issack; Alain, Senteni
2006-01-01
It is postulated that one of the main problems with e-learning environments is their lack of personalisation. This article presents a comprehensive review of the current work in the field and proposes a framework for research in promoting personalisation in Web-based learning environments. The concepts of adaptability, adaptivity and the…
The GBT Dynamic Scheduling System: Development and Testing
NASA Astrophysics Data System (ADS)
McCarty, M.; Clark, M.; Marganian, P.; O'Neil, K.; Shelton, A.; Sessoms, E.
2009-09-01
During the summer trimester of 2008, all observations on the Robert C. Byrd Green Bank Telescope (GBT) were scheduled using the new Dynamic Scheduling System (DSS). Beta testing exercised the policies, algorithms, and software developed for the DSS project. Since observers are located all over the world, the DSS was implemented as a web application. Technologies such as iCalendar, Really Simple Syndication (RSS) feeds, email, and instant messaging are used to transfer as much or as little information to observers as they request. We discuss the software engineering challenges leading to our implementation such as information distribution and building rich user interfaces in the web browser. We also relate our adaptation of agile development practices to design and develop the DSS. Additionally, we describe handling differences in expected versus actual initial conditions in the pool of project proposals for the 08B trimester. We then identify lessons learned from beta testing and present statistics on how the DSS was used during the trimester.
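One of the simpler pieces of such a system is publishing schedule information in standard formats; the Python sketch below renders a single observing period as a minimal iCalendar event (the UID, product identifier, and session name are placeholders, not the DSS's actual output).

from datetime import datetime, timedelta, timezone

def ics_for_observation(summary, start, hours):
    """Render one scheduled observing period as a minimal iCalendar event."""
    end = start + timedelta(hours=hours)
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//dss-sketch//EN",
        "BEGIN:VEVENT",
        f"UID:{start.strftime(fmt)}-dss@example.org",
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(ics_for_observation("GBT session: example project", datetime(2008, 7, 1, 2, 0), 3.5))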
Concept Mapping Your Web Searches: A Design Rationale and Web-Enabled Application
ERIC Educational Resources Information Center
Lee, Y.-J.
2004-01-01
Although it has become very common to use World Wide Web-based information in many educational settings, there has been little research on how to better search and organize Web-based information. This paper discusses the shortcomings of Web search engines and Web browsers as learning environments and describes an alternative Web search environment…
Adapting the iSNOBAL model for improved visualization in a GIS environment
NASA Astrophysics Data System (ADS)
Johansen, W. J.; Delparte, D.
2014-12-01
Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model. It uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study looks at applying this model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment. This type of environment is suited to both the input grid creation and the visualization of results. The data used for input grid creation can be stored locally or on a web-server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows for interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving towards adapting the model further for use in a 3D gaming engine for improved visualization and interaction.
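A minimal Python sketch of the kriging step is shown below, using the third-party pykrige package as a stand-in for the GIS-embedded scripts described above; the station coordinates, temperatures, and grid are invented.

import numpy as np
from pykrige.ok import OrdinaryKriging  # third-party; pip install pykrige

# Hypothetical station observations: x/y coordinates (m) and air temperature (C).
x = np.array([100.0, 420.0, 950.0, 1300.0])
y = np.array([200.0, 760.0, 310.0, 1100.0])
temp = np.array([-2.1, -3.4, -1.0, -4.8])

# Target grid for the interpolated forcing field.
gridx = np.arange(0.0, 1500.0, 100.0)
gridy = np.arange(0.0, 1500.0, 100.0)

ok = OrdinaryKriging(x, y, temp, variogram_model="spherical")
temp_grid, variance = ok.execute("grid", gridx, gridy)  # 2-D interpolated field
print(temp_grid.shape)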
[Development of domain specific search engines].
Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T
2000-01-01
As cyberspace expands at a pace that nobody has ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines, and many commercial search engines have already been put on the market. However, these search engines respond with results so cumbersome that domain-specific experts cannot tolerate them. Using dedicated hardware and commercial software called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response for chemical hazards. They have been available on our Web site for testing.
Getting to the top of Google: search engine optimization.
Maley, Catherine; Baum, Neil
2010-01-01
Search engine optimization is the process of making your Web site appear at or near the top of popular search engines such as Google, Yahoo, and MSN. This is not done by luck or knowing someone working for the search engines but by understanding the process of how search engines select Web sites for placement on top or on the first page. This article will review the process and provide methods and techniques to use to have your site rated at the top or very near the top.
The Role of Human Web Assistants in E-Commerce: An Analysis and a Usability Study.
ERIC Educational Resources Information Center
Aberg, Johan; Shahmehri, Nahid
2000-01-01
Discusses electronic commerce and presents the concept of Web assistants, human assistants working in an electronic Web shop. Presents results of a usability study of a prototype adaptive Web assistant system that show users were enthusiastic about the concept of Web assistants and its implications. (Author/LRW)
De Smet, Bart; Fournier, Jérôme; De Troch, Marleen; Vincx, Magda; Vanaverbeke, Jan
2015-01-01
The potential of ecosystem engineers to modify the structure and dynamics of food webs has recently been hypothesised from a conceptual point of view. Empirical data on the integration of ecosystem engineers and food webs is however largely lacking. This paper investigates the hypothesised link based on a field sampling approach of intertidal biogenic aggregations created by the ecosystem engineer Lanice conchilega (Polychaeta, Terebellidae). The aggregations are known to have a considerable impact on the physical and biogeochemical characteristics of their environment and subsequently on the abundance and biomass of primary food sources and the macrofaunal (i.e. the macro-, hyper- and epibenthos) community. Therefore, we hypothesise that L. conchilega aggregations affect the structure, stability and isotopic niche of the consumer assemblage of a soft-bottom intertidal food web. Primary food sources and the bentho-pelagic consumer assemblage of a L. conchilega aggregation and a control area were sampled on two soft-bottom intertidal areas along the French coast and analysed for their stable isotopes. Despite the structural impacts of the ecosystem engineer on the associated macrofaunal community, the presence of L. conchilega aggregations only has a minor effect on the food web structure of soft-bottom intertidal areas. The isotopic niche width of the consumer communities of the L. conchilega aggregations and control areas are highly similar, implying that consumer taxa do not shift their diet when feeding in a L. conchilega aggregation. Besides, species packing and hence trophic redundancy were not affected, pointing to an unaltered stability of the food web in the presence of L. conchilega. PMID:26496349
ERIC Educational Resources Information Center
Valentine, Andrew; Belski, Iouri; Hamilton, Margaret
2017-01-01
Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed…
Index Compression and Efficient Query Processing in Large Web Search Engines
ERIC Educational Resources Information Center
Ding, Shuai
2013-01-01
The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
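For reference, the data structure in question maps each term to a postings list of the documents containing it; the short Python sketch below builds such an index and answers a conjunctive query by intersecting postings lists (a toy version, with none of the compression the dissertation studies).

import collections
import re

def build_inverted_index(docs):
    """Map each term to a postings list of (doc_id, term_frequency) pairs."""
    index = collections.defaultdict(list)
    for doc_id, text in enumerate(docs):
        counts = collections.Counter(re.findall(r"\w+", text.lower()))
        for term, tf in counts.items():
            index[term].append((doc_id, tf))
    return index

def and_query(index, terms):
    """Conjunctive query: intersect the postings lists of the query terms."""
    postings = [set(d for d, _ in index.get(t, [])) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = [
    "web search engines build an inverted index",
    "the inverted index maps terms to postings lists",
]
index = build_inverted_index(docs)
print(index["inverted"])                 # [(0, 1), (1, 1)]
print(and_query(index, ["inverted", "index"]))  # {0, 1}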
An Analysis of Web Image Queries for Search.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2003-01-01
Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)
Automatic generation of Web mining environments
NASA Astrophysics Data System (ADS)
Cibelli, Maurizio; Costagliola, Gennaro
1999-02-01
The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
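A minimal Python sketch of the three-stage pipeline (WEF, DSP, HIES) described above is shown below; the ranking rule, layout cue, and extraction heuristic are illustrative stand-ins for the system's actual components.

import re
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    html: str

def web_engine_filter(pages, keyword):
    """WEF: sort and reduce the answer set returned by a general search engine."""
    ranked = sorted(pages, key=lambda p: p.html.lower().count(keyword), reverse=True)
    return ranked[:10]

def data_source_preprocessor(page):
    """DSP: use layout cues (here, just <td> cells) to collect page segments."""
    return re.findall(r"<td[^>]*>(.*?)</td>", page.html, flags=re.S | re.I)

def heuristic_extractor(segments, pattern):
    """HIES: apply a domain-specific heuristic (a regex here) to each segment."""
    return [m.group(0) for s in segments for m in re.finditer(pattern, s)]

pages = [Page("http://example.org/cars", "<td>Price: 12,500 USD</td><td>Used car</td>")]
for page in web_engine_filter(pages, "car"):
    segments = data_source_preprocessor(page)
    print(heuristic_extractor(segments, r"\d[\d,]*\s*USD"))  # ['12,500 USD']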
ICTNET at Web Track 2009 Diversity task
2009-11-01
performance. On the World Wide Web, there exist many documents which represent several implicit subtopics. We used commerce search engines to gather those...documents. In this task, our work can be divided into five steps. First, we collect documents returned by commerce search engines, and considered
Impact of a Web-Based Adaptive Supplemental Digital Resource on Student Mathematics Performance
ERIC Educational Resources Information Center
Sharp, Laurie A.; Hamil, Marc
2018-01-01
Much literature has presented evidence that supplemental digital resources enhance student performance with mathematics. The purpose of this study was to explore the impact of a web-adaptive digital resource, Think Through Math©, on student performance with state-mandated annual standardized mathematics assessments. This study utilized a…
Meeting Reference Responsibilities through Library Web Sites.
ERIC Educational Resources Information Center
Adams, Michael
2001-01-01
Discusses library Web sites and explains some of the benefits when libraries make their sites into reference portals, linking them to other useful Web sites. Topics include print versus Web information sources; limitations of search engines; what Web sites to include, including criteria for inclusions; and organizing the sites. (LRW)
Extracting Macroscopic Information from Web Links.
ERIC Educational Resources Information Center
Thelwall, Mike
2001-01-01
Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…
Research on the optimization strategy of web search engine based on data mining
NASA Astrophysics Data System (ADS)
Chen, Ronghua
2018-04-01
With the wide application of search engines, web sites have become an important way for people to obtain information. However, web site content is growing explosively, making it very difficult for people to find the information they need, and current search engines cannot fully meet this need; there is therefore an urgent demand for personalized web site information services, and data mining technology offers a way to meet this new challenge. In order to improve the accuracy with which people find information on web sites, a web site search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy of finding information and reduces the time needed to find it, and so has important practical value.
Database support for adaptation to climate change: An assessment of web-based portals across scales.
Sanderson, Hans; Hilden, Mikael; Russel, Duncan; Dessai, Suraje
2016-10-01
The widely recognized increase in greenhouse gas emissions is necessitating adaptation to a changing climate, and policies are being developed and implemented worldwide, across sectors and between government scales. The aim of this article is to reflect on one of the major challenges: facilitating the sharing of information on adaptation practices. Web portals (i.e., web sites) for disseminating information are important tools in meeting this challenge, and we therefore assessed the characteristics of selected major portals across multiple scales. We found that a rather limited number of case studies is available in the portals (between 900 and 1000 in total), with 95 that include cost information and 195 that include the participation of stakeholders. Portals are rarely cited by researchers, suggesting a suboptimal connection between the practical, policy-related, and scientific development of adaptation. The government portals often lack cross-links between US and European Union (EU) web sites in their search results, for example. With significant investments and policy development emerging in both the United States and the European Union, there is great potential to share information via portals. Moreover, there is the possibility of better connecting the practical adaptation experience from bottom-up projects to the science of adaptation. Integr Environ Assess Manag 2016;12:627-631. © 2016 SETAC.
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965
Faculty Collaboration on Multidisciplinary Web-Based Education.
ERIC Educational Resources Information Center
Saad, Ashraf; Uskov, Vladimir L.; Cedercreutz, Kettil; Geonetta, Sam; Spille, Jack; Abel, Dick
In 1998, faculty members at the University of Cincinnati started a project as an interdepartmental collaboration to investigate the use of World Wide Web-based instructional (WBI) tools. The project team included representatives from various areas such as information engineering technology, mechanical engineering technology, chemical technology,…
Working without a Crystal Ball: Predicting Web Trends for Web Services Librarians
ERIC Educational Resources Information Center
Ovadia, Steven
2008-01-01
User-centered design is a principle stating that electronic resources, like library Web sites, should be built around the needs of the users. This article interviews Web developers of library and non-library-related Web sites, determining how they assess user needs and how they decide to adapt certain technologies for users. According to the…
Design and Evaluation of an Open Web Platform Cartography Lab Curriculum
ERIC Educational Resources Information Center
Sack, Carl M.; Roth, Robert E.
2017-01-01
Recent shifts in web map technology away from proprietary software and toward development on the Open Web Platform have increased the number and complexity of technical skills needed to do cartography on the Web. Web-based cartography curricula likewise must be adapted to prepare geography, cartography, and GIS students with the skills needed to…
ERIC Educational Resources Information Center
Williamson, Jeanine M.; Han, Lee D.; Colon-Aguirre, Monica
2009-01-01
The study examined the extent of cross-disciplinarity in nanotechnology and transportation engineering research. Researchers in these two fields were determined from the web sites of the U.S. News and World Report top 100 schools in civil engineering and materials science. Web of Science searches for 2006 and 2007 articles were obtained and the…
Adaptations in a hierarchical food web of southeastern Lake Michigan
Krause, Ann E.; Frank, Ken A.; Jones, Michael L.; Nalepa, Thomas F.; Barbiero, Richard P.; Madenjian, Charles P.; Agy, Megan; Evans, Marlene S.; Taylor, William W.; Mason, Doran M.; Léonard, Nancy J.
2009-01-01
Two issues in ecological network theory are: (1) how to construct an ecological network model and (2) how do entire networks (as opposed to individual species) adapt to changing conditions? We present a novel method for constructing an ecological network model for the food web of southeastern Lake Michigan (USA) and we identify changes in key system properties that are large relative to their uncertainty as this ecological network adapts from one time point to a second time point in response to multiple perturbations. To construct our food web for southeastern Lake Michigan, we followed the list of seven recommendations outlined in Cohen et al. [Cohen, J.E., et al., 1993. Improving food webs. Ecology 74, 252–258] for improving food webs. We explored two inter-related extensions of hierarchical system theory with our food web; the first one was that subsystems react to perturbations independently in the short-term and the second one was that a system's properties change at a slower rate than its subsystems’ properties. We used Shannon's equations to provide quantitative versions of the basic food web properties: number of prey, number of predators, number of feeding links, and connectance (or density). We then compared these properties between the two time-periods by developing distributions of each property for each time period that took uncertainty about the property into account. We compared these distributions, and concluded that non-overlapping distributions indicated changes in these properties that were large relative to their uncertainty. Two subsystems were identified within our food web system structure (p < 0.001). One subsystem had more non-overlapping distributions in food web properties between Time 1 and Time 2 than the other subsystem. The overall system had all overlapping distributions in food web properties between Time 1 and Time 2. These results supported both extensions of hierarchical systems theory. Interestingly, the subsystem with more non-overlapping distributions in food web properties was the subsystem that contained primarily benthic taxa, contrary to expectations that the identified major perturbations (lower phosphorous inputs and invasive species) would more greatly affect the subsystem containing primarily pelagic taxa. Future food-web research should employ rigorous statistical analysis and incorporate uncertainty in food web properties for a better understanding of how ecological networks adapt.
Autonomous Satellite Command and Control through the World Wide Web: Phase 3
NASA Technical Reports Server (NTRS)
Cantwell, Brian; Twiggs, Robert
1998-01-01
NASA's New Millennium Program (NMP) has identified a variety of revolutionary technologies that will support orders of magnitude improvements in the capabilities of spacecraft missions. This program's Autonomy team has focused on science and engineering automation technologies. In doing so, it has established a clear development roadmap specifying the experiments and demonstrations required to mature these technologies. The primary developmental thrusts of this roadmap are in the areas of remote agents, PI/operator interface, planning/scheduling, fault management, and smart execution architectures. Phases 1 and 2 of the ASSET Project (previously known as the WebSat project) have focused on establishing World Wide Web-based commanding and telemetry services as an advanced means of interfacing a spacecraft system with the PI and operators. Current automated capabilities include Web-based command submission, limited contact scheduling, command list generation and transfer to the ground station, spacecraft support for demonstration experiments, data transfer from the ground station back to the ASSET system, data archiving, and Web-based telemetry distribution. Phase 2 was finished in December 1996. During January-December 1997, work commenced on Phase 3 of the ASSET Project. Phase 3 is the subject of this report. This phase permitted SSDL and its project partners to expand the ASSET system in a variety of ways. These added capabilities included the advancement of ground station capabilities, the adaptation of spacecraft on-board software, and the expansion of capabilities of the ASSET management algorithms. Specific goals of Phase 3 were: (1) Extend Web-based goal-level commanding for both the payload PI and the spacecraft engineer; (2) Support prioritized handling of multiple PIs as well as associated payload experimenters; (3) Expand the number and types of experiments supported by the ASSET system and its associated spacecraft; (4) Implement more advanced resource management, modeling and fault management capabilities that integrate the space and ground segments of the space system hardware; (5) Implement a beacon monitoring test; (6) Implement an experimental blackboard controller for space system management; (7) Further define typical ground station developments required for Internet-based remote control and for full system automation of the PI-to-spacecraft link. Each of those goals is examined in the next section. Significant sections of this report were also published as a conference paper.
49 CFR 571.5 - Matter incorporated by reference
Code of Federal Regulations, 2010 CFR
2010-10-01
Lists contact information (addresses, phone numbers, and web sites) for organizations whose materials are incorporated by reference, including the Government Printing Office (Washington, DC 20402); the Illuminating Engineering Society of North America (IES), 120 Wall St.; the National Center for Health Statistics (phone 1-800-232-4636; web http://www.cdc.gov/nchs), publisher of "Weight, Height, and Selected Body Dimensions of..."; and the Society of Automotive Engineers (SAE), Pennsylvania 15096 (phone 1-724-776-4841; web http://www.sae.org).
Getting Answers to Natural Language Questions on the Web.
ERIC Educational Resources Information Center
Radev, Dragomir R.; Libner, Kelsey; Fan, Weiguo
2002-01-01
Describes a study that investigated the use of natural language questions on Web search engines. Highlights include query languages; differences in search engine syntax; and results of logistic regression and analysis of variance that showed aspects of questions that predicted significantly different performances, including the number of words,…
"Just the Answers, Please": Choosing a Web Search Service.
ERIC Educational Resources Information Center
Feldman, Susan
1997-01-01
Presents guidelines for selecting World Wide Web search engines. Real-life questions were used to test six search engines. Queries sought company information, product reviews, medical information, foreign information, technical reports, and current events. Compares performance and features of AltaVista, Excite, HotBot, Infoseek, Lycos, and Open…
A Search Engine Features Comparison.
ERIC Educational Resources Information Center
Vorndran, Gerald
Until recently, the World Wide Web (WWW) public access search engines have not included many of the advanced commands, options, and features commonly available with the for-profit online database user interfaces, such as DIALOG. This study evaluates the features and characteristics common to both types of search interfaces, examines the Web search…
ERIC Educational Resources Information Center
Gupta, Amardeep
2005-01-01
Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…
Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba
2013-02-01
Human musculoskeletal system resources (HMSR) are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to dynamically acquire accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score and related mathematical formulas were also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, it allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to remotely access useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
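The abstract above mentions a new semantic-based PageRank score without giving its formulas; as background, here is a minimal Python sketch of the classic PageRank power iteration on a toy link graph (plain PageRank only, not the semantic variant described in the paper).

    def pagerank(links, damping=0.85, iterations=50):
        # links: dict mapping each page to the list of pages it links to.
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for p, outlinks in links.items():
                if not outlinks:                 # dangling page: spread its rank evenly
                    for q in pages:
                        new_rank[q] += damping * rank[p] / n
                else:
                    for q in outlinks:
                        new_rank[q] += damping * rank[p] / len(outlinks)
            rank = new_rank
        return rank

    # Toy graph with hypothetical pages
    print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))

A semantic variant such as the one described would additionally weight links or pages by their ontology-derived relevance, which this sketch does not attempt.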
What Can the Semantic Web Do for Adaptive Educational Hypermedia?
ERIC Educational Resources Information Center
Cristea, Alexandra I.
2004-01-01
Semantic Web and Adaptive Hypermedia come from different backgrounds, but it turns out that actually, they can benefit from each other, and that their confluence can lead to synergistic effects. This encounter can influence several fields, among which an important one is Education. This paper presents an analysis of this encounter, first from a…
Results from a Web Impact Factor Crawler.
ERIC Educational Resources Information Center
Thelwall, Mike
2001-01-01
Discusses Web impact factors (WIFs), Web versions of the impact factors for journals, and how they can be calculated by using search engines. Highlights include HTML and document indexing; Web page links; a Web crawler designed for calculating WIFs; and WIFs for United Kingdom universities that measured research profiles or capability. (Author/LRW)
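As a rough illustration of the calculation discussed above, a Web Impact Factor is commonly computed as the number of pages that link to a site divided by the number of pages within the site, with both counts obtained from search engine queries; the sketch below assumes the counts have already been retrieved (no live search engine calls are made, and the numbers are invented).

    def web_impact_factor(inlink_count, site_page_count):
        # WIF ~= pages linking to the site / pages within the site
        if site_page_count == 0:
            raise ValueError("site has no indexed pages")
        return inlink_count / site_page_count

    # Hypothetical counts reported by a search engine for one university web site
    print(round(web_impact_factor(inlink_count=12500, site_page_count=48000), 3))  # 0.26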
Surfing the World Wide Web to Education Hot-Spots.
ERIC Educational Resources Information Center
Dyrli, Odvard Egil
1995-01-01
Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)
Galbusera, Fabio; Brayda-Bruno, Marco; Freutel, Maren; Seitz, Andreas; Steiner, Malte; Wehrle, Esther; Wilke, Hans-Joachim
2012-01-01
Previous surveys showed poor quality among web sites providing health information about low back pain. However, the rapid and continuous evolution of Internet content may call the current validity of those investigations into question. The present study aims to quantitatively assess the quality of Internet information about low back pain retrieved with the most commonly employed search engines. An Internet search with the keywords "low back pain" was performed with Google, Yahoo!® and Bing™ in the English language. The top 30 hits obtained with each search engine were evaluated by five independent raters, whose scores were averaged, following criteria derived from previous work. All search results were categorized according to whether they declared compliance with a quality standard for health information (e.g. HONcode) and by web site type (Institutional, Free informative, Commercial, News, Social Network, Unknown). The quality of the hits retrieved by the three search engines was extremely similar. The web sites had a clear purpose and were easy to navigate, but mostly lacked validity and quality in the provided links. Conformity to a quality standard was correlated with markedly greater web site quality in all respects. Institutional web sites had the best validity and ease of use. Free informative web sites had good quality but markedly lower validity compared to Institutional web sites. Commercial web sites provided more biased information. News web sites were well designed and easy to use, but lacked validity. The average quality of the hits retrieved by the most commonly employed search engines can be considered satisfactory and compares favorably with previous investigations. User awareness of the need to check the quality of the information remains a concern.
ACHP | Federal Historic Preservation Web Sites
Links to federal historic preservation web sites, including the Historic American Buildings Survey / Historic American Engineering Record / Historic American Landscapes Survey (lcweb2.loc.gov/ammem/hhhtml).
Allahverdyan, A E; Babajanyan, S G; Martirosyan, N H; Melkikh, A V
2016-07-15
A major limitation of many heat engines is that their functioning demands on-line control and/or an external fitting between the environmental parameters (e.g., the temperatures of thermal baths) and the internal parameters of the engine. We study a model for an adaptive heat engine in which, due to feedback from the functional part, the engine's structure adapts to the given thermal baths. Hence, no on-line control and no external fitting are needed. The engine can employ unknown resources; it can also adapt to the results of its own functioning, which bring the bath temperatures closer together. We determine the resources of adaptation and relate them to the prior information available about the environment.
FindZebra: a search engine for rare diseases.
Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole
2013-06-01
The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose, as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real-life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated, freely available online medical information. FindZebra outperforms Google Search both in its default set-up and when customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve diagnostic quality without compromising the ease of use of the widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Arabshahi, P.; Chao, Y.; Chien, S.; Gray, A.; Howe, B. M.; Roy, S.
2008-12-01
In many areas of Earth science, including climate change research, there is a need for near real-time integration of data from heterogeneous and spatially distributed sensors, in particular in-situ and space-based sensors. The data integration, as provided by a smart sensor web, enables numerous improvements, namely, 1) adaptive sampling for more efficient use of expensive space-based sensing assets, 2) higher fidelity information gathering from data sources through integration of complementary data sets, and 3) improved sensor calibration. The specific purpose of the smart sensor web development presented here is to provide for adaptive sampling and calibration of space-based data via in-situ data. Our ocean-observing smart sensor web presented herein is composed of both mobile and fixed underwater in-situ ocean sensing assets and Earth Observing System (EOS) satellite sensors providing larger-scale sensing. An acoustic communications network forms a critical link in the web between the in-situ and space-based sensors and facilitates adaptive sampling and calibration. After an overview of primary design challenges, we report on the development of various elements of the smart sensor web. These include (a) a cable-connected mooring system with a profiler under real-time control with inductive battery charging; (b) a glider with integrated acoustic communications and broadband receiving capability; (c) satellite sensor elements; (d) an integrated acoustic navigation and communication network; and (e) a predictive model via the Regional Ocean Modeling System (ROMS). Results from field experiments, including an upcoming one in Monterey Bay (October 2008) using live data from NASA's EO-1 mission in a semi-closed-loop system, together with ocean models from ROMS, are described. Plans for future adaptive sampling demonstrations using the smart sensor web are also presented.
Web Formation - Skylab Student Experiment ED-52
NASA Technical Reports Server (NTRS)
1973-01-01
Judith S. Miles of Lexington High School, Lexington, Massachusetts, proposed Skylab student experiment ED-52, Web Formation. This experiment was a study of a spider's behavior in a weightless environment. The geometrical structure of the web of the orb-weaving spider provides a good measure of the condition of its central nervous system. Since the spider senses its own weight to determine the required thickness of web material and uses both the wind and gravity to initiate construction of its web, the lack of gravitational force in Skylab provided a new and different stimulus to the spider's behavioral response. Two common cross spiders, Arabella and Anita, were used for the experiment aboard the Skylab-3 mission. After initial disoriented attempts, both spiders produced almost Earth-like webs once they had adapted to weightlessness. This photograph is of Arabella, a cross spider, in her initial attempt at spinning a web. This picture was taken by the crew of the Skylab 3 mission before Arabella adapted to her new environment.
A review of the reporting of web searching to identify studies for Cochrane systematic reviews.
Briscoe, Simon
2018-03-01
The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method. Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reviews reported using web searching using a search engine or website as a literature search method. In the majority of reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers
Alsaleh, Mansour; Alarifi, Abdulrahman
2016-01-01
Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents. PMID:27855179
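The study above builds language-specific feature sets for spam detection; the following sketch (using scikit-learn, with invented toy pages and generic content features, not the paper's actual Arabic feature set) illustrates the general approach of training a classifier on page-level features.

    # Minimal sketch of a content-feature web spam classifier, assuming scikit-learn is installed.
    from sklearn.linear_model import LogisticRegression

    def page_features(page):
        words = page["text"].split()
        return [
            len(words),                                       # page length
            len(set(words)) / max(len(words), 1),             # vocabulary diversity
            page["text"].count("http") / max(len(words), 1),  # crude link-density proxy
            page["title_overlap"],                            # title/content overlap ratio
        ]

    pages = [  # toy labelled corpus: 1 = spam, 0 = benign
        {"text": "buy cheap buy cheap http http http", "title_overlap": 0.1, "label": 1},
        {"text": "introduction to web usage mining for education", "title_overlap": 0.8, "label": 0},
        {"text": "win win win prize http click http", "title_overlap": 0.2, "label": 1},
        {"text": "adaptive hypermedia systems personalise hyperlinks", "title_overlap": 0.9, "label": 0},
    ]

    X = [page_features(p) for p in pages]
    y = [p["label"] for p in pages]
    clf = LogisticRegression().fit(X, y)

    new_page = {"text": "cheap prize http http http click", "title_overlap": 0.1}
    print(clf.predict([page_features(new_page)]))  # likely [1] (spam) on this toy data

Per-language classifiers would be trained in the same way but on separate corpora, with feature sets tuned to each language.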
Adams, Audrey; Timmins, Fiona
2006-01-01
This paper describes students' experiences of a Web-based innovation at one university, reporting on the first phase of this development, in which two Web-based modules were created. Using a survey approach (n=44), students' access to and use of computer technology were explored. Findings revealed that students' prior use of computers and Internet technologies was higher than previously reported, although use of databases was low. Skills in this area increased during the programme, with a significant rise in database, email, search engine and word processing use. Many specific computer skills were learned during the programme, with high numbers reporting the ability to deal adequately with files and folders. Overall, the experience was a positive one for students. While a general sense of student isolation was not reported, as many students kept in touch by phone and class attendance continued, some individual students did appear to isolate themselves. This teaching methodology has much to offer in the provision of convenient, easy-to-access programmes that can be adapted to individual lifestyles. However, student support mechanisms need careful consideration for students at risk of becoming isolated. Staff also need to be supported in delivering this methodology, and face-to-face contact with teachers for some part of the programme is preferable.
Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model
NASA Astrophysics Data System (ADS)
Kim, Sangjo; Kim, Kuisoon; Son, Changmin
2018-04-01
An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as temperatures at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of the adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated against the performance test data in the sea-level static condition. In the flight condition at 20,000 ft and Mach 0.9, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), with the difference reduced from 3.2 to 0.4%.
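The two-step method above hinges on fitting adaptation factors so that the component maps reproduce test data; the sketch below is a deliberately simplified, hypothetical version (a single map-scaling factor fitted with scipy rather than the paper's multi-island genetic algorithm and full engine model) intended only to illustrate the idea of calibrating a map against measurements.

    # Simplified illustration only: one compressor-map scaling (adaptation) factor.
    import numpy as np
    from scipy.optimize import minimize_scalar

    map_pressure_ratio = np.array([6.0, 8.5, 11.0, 14.0])    # baseline map points (toy values)
    test_pressure_ratio = np.array([6.4, 9.0, 11.8, 14.9])   # measured data (toy values)

    def error(scale):
        # Squared error between the scaled map and the test data.
        return float(np.sum((scale * map_pressure_ratio - test_pressure_ratio) ** 2))

    result = minimize_scalar(error, bounds=(0.8, 1.2), method="bounded")
    print("adaptation factor:", round(result.x, 4))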
Optimal area use in orb webs of the spider Araneus diadematus.
Krink, T; Vollrath, F
2000-02-01
We studied the abilities of the garden cross spider Araneus diadematus regarding adaptation of web geometry to spatial constraints. Spiders reacted to a spatial reduction in their building site from a square-shaped frame to a slimmer, rectangular frame (side ratio 1 : 2) by maintaining overall web geometry while reducing the web area covered by the sticky capture spiral. However, when the frames were changed further to a rectangular side ratio of 1 : 3, the spiders changed specific web properties in such a way that a further reduction in the capture spiral area was prevented. Construction of the threads making up the web frame and the auxiliary spiral requires that the spider explores the spatial constraints of its building site. The geometry of both frame and auxiliary spiral threads in turn determine the geometry of the capture threads. Since in very narrow frames the spider adjusted the auxiliary to suit the subsequent capture spiral, we suggest that an initial spatial survey led to the final adaptation of overall web geometry to a web site.
NASA Astrophysics Data System (ADS)
Colomo-Palacios, Ricardo; Jiménez-López, Diego; García-Crespo, Ángel; Blanco-Iglesias, Borja
eLearning processes are a challenge for educational institutions and education professionals. In an environment in which learning resources are being produced, catalogued and stored in innovative ways, SOLE provides a platform in which exam questions can be produced with the support of Web 2.0 tools, catalogued and labeled via the semantic web, and stored and distributed using eLearning standards. This paper presents SOLE, a social network for sharing exam questions particularized for the Software Engineering domain, based on semantics and built using semantic web and eLearning standards such as the IMS Question and Test Interoperability specification 2.1.
Fuel quantity modulation in pilot ignited engines
May, Andrew
2006-05-16
An engine system includes a first fuel regulator adapted to control an amount of a first fuel supplied to the engine, a second fuel regulator adapted to control an amount of a second fuel supplied to the engine concurrently with the first fuel being supplied to the engine, and a controller coupled to at least the second fuel regulator. The controller is adapted to determine the amount of the second fuel supplied to the engine in a relationship to the amount of the first fuel supplied to the engine to operate in igniting the first fuel at a specified time in steady state engine operation and adapted to determine the amount of the second fuel supplied to the engine in a manner different from the relationship at steady state engine operation in transient engine operation.
The Web Resource Collaboration Center
ERIC Educational Resources Information Center
Dunlap, Joanna C.
2004-01-01
The Web Resource Collaboration Center (WRCC) is a web-based tool developed to help software engineers build their own web-based learning and performance support systems. Designed using various online communication and collaboration technologies, the WRCC enables people to: (1) build a learning and professional development resource that provides…
Multimedia Web Searching Trends.
ERIC Educational Resources Information Center
Ozmutlu, Seda; Spink, Amanda; Ozmutlu, H. Cenk
2002-01-01
Examines and compares multimedia Web searching by Excite and FAST search engine users in 2001. Highlights include audio and video queries; time spent on searches; terms per query; ranking of the most frequently used terms; and differences in Web search behaviors of U.S. and European Web users. (Author/LRW)
Web-Based Simulation Games for the Integration of Engineering and Business Fundamentals
ERIC Educational Resources Information Center
Calfa, Bruno; Banholzer, William; Alger, Monty; Doherty, Michael
2017-01-01
This paper describes a web-based suite of simulation games that have the purpose to enhance the chemical engineering curriculum with business-oriented decisions. Two simulation cases are discussed whose teaching topics include closing material and energy balances, importance of recycle streams, price-volume relationship in a dynamic market, impact…
Search Engines: A Primer on Finding Information on the World Wide Web.
ERIC Educational Resources Information Center
Maddux, Cleborne
1996-01-01
Presents an annotated list of several World Wide Web search engines, including Yahoo, Infoseek, Alta Vista, Magellan, Lycos, Webcrawler, Excite, Deja News, and the LISZT Directory of discussion groups. Uniform Resource Locators (URLs) are included. Discussion assesses performance and describes rules and syntax for refining or limiting a search.…
Just-in-Time Web Searches for Trainers & Adult Educators.
ERIC Educational Resources Information Center
Kirk, James J.
Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…
The Advancement in Using Remote Laboratories in Electrical Engineering Education: A Review
ERIC Educational Resources Information Center
Almarshoud, A. F.
2011-01-01
The rapid development in Internet technology and its big popularity has led some universities around the world to incorporate web-based learning in some of their programmes. The present paper introduces a comprehensive survey of the publications about using remote laboratories in electrical engineering education. Remote laboratories are web-based,…
Use of an Academic Library Web Site Search Engine.
ERIC Educational Resources Information Center
Fagan, Jody Condit
2002-01-01
Describes an analysis of the search engine logs of Southern Illinois University, Carbondale's library to determine how patrons used the site search. Discusses results that showed patrons did not understand the function of the search and explains improvements that were made in the Web site and in online reference services. (Author/LRW)
MetaSpider: Meta-Searching and Categorization on the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel
2001-01-01
Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…
GeoSearcher: Location-Based Ranking of Search Engine Results.
ERIC Educational Resources Information Center
Watters, Carolyn; Amoudi, Ghada
2003-01-01
Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
How Public Is the Web?: Robots, Access, and Scholarly Communication.
ERIC Educational Resources Information Center
Snyder, Herbert; Rosenbaum, Howard
1998-01-01
Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators show that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
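Since the entry above concerns the Robot Exclusion Protocol, here is a small sketch showing how a crawler can honour a site's robots.txt using Python's standard urllib.robotparser; the URL and user-agent string are placeholders.

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.example.edu/robots.txt")  # placeholder university site
    rp.read()                                         # fetch and parse robots.txt

    # A polite crawler checks permission before fetching a page
    if rp.can_fetch("ExampleCrawler/1.0", "https://www.example.edu/library/catalog.html"):
        print("allowed to crawl this page")
    else:
        print("excluded by the Robot Exclusion Protocol")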
ERIC Educational Resources Information Center
Larson, Ray R.
1996-01-01
Examines the bibliometrics of the World Wide Web based on analysis of Web pages collected by the Inktomi "Web Crawler" and on the use of the DEC AltaVista search engine for cocitation analysis of a set of Earth Science related Web sites. Looks at the statistical characteristics of Web documents and their hypertext links, and the…
ERIC Educational Resources Information Center
Polat, Elif; Adiguzel, Tufan; Akgun, Ozcan Erkan
2012-01-01
Because there is, currently, no education system for primary school students in grades 1-3 who have specific learning disabilities in Turkey and because such students do not receive sufficient support from face-to-face counseling, a needs analysis was conducted in order to prepare an adaptive, web-assisted learning system according to variables…
ERIC Educational Resources Information Center
Raeder, Aggi
1997-01-01
Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)
Bennett, Antonia V; Keenoy, Kathleen; Shouery, Marwan; Basch, Ethan; Temple, Larissa K
2016-05-01
To assess the equivalence of patient-reported outcome (PRO) survey responses across Web, interactive voice response system (IVRS), and paper modes of administration. Postoperative colorectal cancer patients with home Web/e-mail and phone were randomly assigned to one of the eight study groups: Groups 1-6 completed the survey via Web, IVRS, and paper, in one of the six possible orders; Groups 7-8 completed the survey twice, either by Web or by IVRS. The 20-item survey, including the MSKCC Bowel Function Instrument (BFI), the LASA Quality of Life (QOL) scale, and the Subjective Significance Questionnaire (SSQ) adapted to bowel function, was completed from home on consecutive days. Mode equivalence was assessed by comparison of mean scores across modes and intraclass correlation coefficients (ICCs) and was compared to the test-retest reliability of Web and IVRS. Of 170 patients, 157 completed at least one survey and were included in analysis. Patients had mean age 56 (SD = 11), 53% were male, 81% white, 53% colon, and 47% rectal cancer; 78% completed all assigned surveys. Mean scores for BFI total score, BFI subscale scores, LASA QOL, and adapted SSQ varied by mode by less than one-third of a score point. ICCs across mode were: BFI total score (Web-paper = 0.96, Web-IVRS = 0.97, paper-IVRS = 0.97); BFI subscales (range = 0.88-0.98); LASA QOL (Web-paper = 0.98, Web-IVRS = 0.78, paper-IVRS = 0.80); and SSQ (Web-paper = 0.92, Web-IVRS = 0.86, paper-IVRS = 0.79). Mode equivalence was demonstrated for the BFI total score, BFI subscales, LASA QOL, and adapted SSQ, supporting the use of multiple modes of PRO data capture in clinical trials.
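Mode equivalence above is assessed with intraclass correlation coefficients; the sketch below computes a simple one-way random-effects ICC(1,1) for a toy patients-by-modes score matrix. This is only one of several ICC forms and is not necessarily the exact variant used in the study; the scores are invented.

    import numpy as np

    def icc_1_1(scores):
        # One-way random-effects ICC(1,1) for an n_subjects x k_modes matrix.
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand_mean = scores.mean()
        ss_between = k * np.sum((scores.mean(axis=1) - grand_mean) ** 2)
        ss_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2)
        ms_between = ss_between / (n - 1)
        ms_within = ss_within / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Toy total scores for five patients completing the survey by Web, IVRS and paper
    scores = [[60, 61, 59],
              [45, 44, 46],
              [70, 69, 70],
              [52, 53, 51],
              [66, 65, 66]]
    print(round(icc_1_1(scores), 3))  # close to 1.0, consistent with mode equivalence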
An Open Source Tool to Test Interoperability
NASA Astrophysics Data System (ADS)
Bermudez, L. E.
2012-12-01
Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components involves protocols, message encodings and error handling. Testing these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. By following standards, interoperability between components increases while the time needed to develop new software is reduced. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available on SourceForge, that can be run via the command line, deployed in a web servlet container or integrated into a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against Schema- and Schematron-based assertions for any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses, conformance of GML-encoded data, proper values for elements and attributes in the XML, and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to testing via the OGC testing web site and a description of performing local tests. It will also provide information about how to participate in the open source development of TEAM Engine.
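TEAM Engine itself expresses its assertions in CTL and TestNG; the sketch below is not TEAM Engine code but a generic Python illustration of the same kind of conformance check, sending a GetCapabilities request to a hypothetical WFS endpoint and asserting basic properties of the HTTP and XML response.

    # Generic illustration of a service conformance check (not TEAM Engine or CTL itself).
    # The endpoint URL is a placeholder.
    import urllib.request
    import xml.etree.ElementTree as ET

    url = "https://example.org/wfs?service=WFS&request=GetCapabilities&version=1.0.0"

    with urllib.request.urlopen(url, timeout=30) as response:
        assert response.status == 200, "expected HTTP 200 from GetCapabilities"
        content_type = response.headers.get("Content-Type", "")
        assert "xml" in content_type.lower(), "capabilities document should be XML"
        root = ET.fromstring(response.read())

    # A WFS 1.0.0 capabilities document has WFS_Capabilities as its root element
    assert root.tag.endswith("WFS_Capabilities"), "unexpected root element: %s" % root.tag
    print("basic conformance assertions passed")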
Communicating climate change adaptation information using web-based platforms
NASA Astrophysics Data System (ADS)
Karali, Eleni; Mattern, Kati
2017-07-01
To facilitate progress in climate change adaptation policy and practice, it is important not only to ensure the production of accurate, comprehensive and relevant information, but also the easy, timely and affordable access to it. This can contribute to better-informed decisions and improve the design and implementation of adaptation policies and other relevant initiatives. Web-based platforms can play an important role in communicating and distributing data, information and knowledge that are constantly becoming available, reaching out to a large group of potential users. Indeed, in the last decade there has been an extensive increase in the number of platforms developed for this purpose in many fields, including climate change adaptation. This short paper concentrates on the web-based adaptation platforms developed in Europe. It provides an overview of the recently emerged landscape, examines the basic characteristics of a set of platforms that operate at national, transnational and European level, and discusses some of the key challenges related to their development, maintenance and overall management. Findings presented in this short paper are discussed in greater detail in the European Environment Agency Technical Report "Overview of climate change adaptation platforms in Europe".
Katherine Fleming, Database and Web Applications Engineer, works on database and web application development in the Commercial Buildings Research group. Before this role, Katherine was pursuing a Ph.D. with a focus on robotics and working as a Web developer with a focus on Web accessibility.
ERIC Educational Resources Information Center
McDermott, Irene E.
1999-01-01
Describes the development and current status of WebRing, a service that links related Web sites into a central hub. Discusses it as a viable alternative to other search engines and examines issues of free speech, use by the business sector, and implications for WebRing after its purchase by Yahoo! (LRW)
Method and system of measuring ultrasonic signals in the plane of a moving web
Hall, Maclin S.; Jackson, Theodore G.; Wink, Wilmer A.; Knerr, Christopher
1996-01-01
An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like. In addition to velocity measurements of ultrasonic signals in the plane of the web in the machine direction, MD, and a cross direction, CD, generally perpendicular to the direction of the traveling web, therefor, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web are measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide relatively constant contact force between the transducers and the webs, the transducers are mounted in a sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web.
Method and system of measuring ultrasonic signals in the plane of a moving web
Hall, M.S.; Jackson, T.G.; Wink, W.A.; Knerr, C.
1996-02-27
An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like is disclosed. In addition to velocity measurements of ultrasonic signals in the plane of the web in the machine direction, MD, and a cross direction, CD, generally perpendicular to the direction of the traveling web, therefore, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web are measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide relatively constant contact force between the transducers and the webs, the transducers are mounted in a sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web. 37 figs.
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Secondly, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Therefore, establishing a more comfortable retrieval system involves the use of a higher-level, platform-independent language such as Java, which makes it possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using Java objects as the primary storage containers, which are then stored by a Java-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.
Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y
2018-01-02
Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and result interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer.
ERIC Educational Resources Information Center
An, Lu; Qiu, Junping
2004-01-01
Journal impact factors (JIFs) as determined by the Institute for Scientific and Technological Information of China (ISTIC) of forty-two Chinese engineering journals were compared with external Web link counts, obtained from Lycos, and Web Impact Factors (WIFs) of corresponding journal Web sites to determine if any significant correlation existed…
Edelstein, Michael; Wallensten, Anders; Zetterqvist, Inga; Hulth, Anette
2014-01-01
Norovirus outbreaks severely disrupt healthcare systems. We evaluated whether Websök, an internet-based surveillance system using search engine data, improved norovirus surveillance and response in Sweden. We compared Websök users' characteristics with the general population, cross-correlated weekly Websök searches with laboratory notifications between 2006 and 2013, compared the time Websök and laboratory data crossed the epidemic threshold and surveyed infection control teams about their perception and use of Websök. Users of Websök were not representative of the general population. Websök correlated with laboratory data (b = 0.88-0.89) and gave an earlier signal to the onset of the norovirus season compared with laboratory-based surveillance. 17/21 (81%) infection control teams answered the survey, of which 11 (65%) believed Websök could help with infection control plans. Websök is a low-resource, easily replicable system that detects the norovirus season as reliably as laboratory data, but earlier. Using Websök in routine surveillance can help infection control teams prepare for the yearly norovirus season. PMID:24955857
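A lagged correlation like the one reported above, with search activity leading laboratory notifications, can be illustrated with a small pandas sketch on toy weekly series; the series, noise levels and two-week lag below are invented.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    weeks = pd.date_range("2012-10-07", periods=30, freq="W")
    season = np.exp(-0.5 * ((np.arange(30) - 15) / 4.0) ** 2)  # a single winter peak

    searches = pd.Series(100 * season + rng.normal(0, 3, 30), index=weeks)
    lab_cases = pd.Series(80 * np.roll(season, 2) + rng.normal(0, 3, 30), index=weeks)  # peaks ~2 weeks later

    for lag in range(4):
        r = searches.corr(lab_cases.shift(-lag))  # searches at week t vs lab cases at week t+lag
        print(f"lag {lag} weeks: r = {r:.2f}")

On this toy data the correlation peaks at a lag of about two weeks, mirroring the idea that search-based surveillance can signal the onset of the norovirus season earlier than laboratory data.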
Adapting an in-person patient-caregiver communication intervention to a tailored web-based format.
Zulman, Donna M; Schafenacker, Ann; Barr, Kathryn L C; Moore, Ian T; Fisher, Jake; McCurdy, Kathryn; Derry, Holly A; Saunders, Edward W; An, Lawrence C; Northouse, Laurel
2012-03-01
Interventions that target cancer patients and their caregivers have been shown to improve patient-caregiver communication, support, and emotional well-being. To adapt an in-person communication intervention for cancer patients and caregivers to a web-based format, and to examine the usability and acceptability of the web-based program among representative users. A tailored, interactive web-based communication program for cancer patients and their family caregivers was developed based on an existing in-person, nurse-delivered intervention. The development process involved: (1) building a multidisciplinary team of content and web design experts, (2) combining key components of the in-person intervention with the unique tailoring and interactive features of a web-based platform, and (3) conducting focus groups and usability testing to obtain feedback from representative program users at multiple time points. Four focus groups with 2-3 patient-caregiver pairs per group (n = 22 total participants) and two iterations of usability testing with four patient-caregiver pairs per session (n = 16 total participants) were conducted. Response to the program's structure, design, and content was favorable, even among users who were older or had limited computer and Internet experience. The program received high ratings for ease of use and overall usability (mean System Usability Score of 89.5 out of 100). Many elements of a nurse-delivered patient-caregiver intervention can be successfully adapted to a web-based format. A multidisciplinary design team and an iterative evaluation process with representative users were instrumental in the development of a usable and well-received web-based program. Copyright © 2011 John Wiley & Sons, Ltd.
Zulman, Donna M.; Schafenacker, Ann; Barr, Kathryn L.C.; Moore, Ian T.; Fisher, Jake; McCurdy, Kathryn; Derry, Holly A.; Saunders, Edward W.; An, Lawrence C.; Northouse, Laurel
2011-01-01
Background Interventions that target cancer patients and their caregivers have been shown to improve communication, support, and emotional well-being. Objective To adapt an in-person communication intervention for cancer patients and caregivers to a web-based format, and to examine the usability and acceptability of the web-based program among representative users. Methods A tailored, interactive web-based communication program for cancer patients and their family caregivers was developed based on an existing in-person, nurse-delivered intervention. The development process involved: 1) building a multidisciplinary team of content and web design experts, 2) combining key components of the in-person intervention with the unique tailoring and interactive features of a web-based platform, and 3) conducting focus groups and usability testing to obtain feedback from representative program users at multiple time points. Results Four focus groups with 2 to 3 patient-caregiver pairs per group (n = 22 total participants) and two iterations of usability testing with 4 patient-caregiver pairs per session (n = 16 total participants) were conducted. Response to the program's structure, design, and content was favorable, even among users who were older or had limited computer and internet experience. The program received high ratings for ease of use and overall usability (mean System Usability Score of 89.5 out of 100). Conclusions Many elements of a nurse-delivered patient-caregiver intervention can be successfully adapted to a web-based format. A multidisciplinary design team and an iterative evaluation process with representative users were instrumental in the development of a usable and well-received web-based program. PMID:21830255
Effectiveness of off-line and web-based promotion of health information web sites.
Jones, Craig E; Pinnock, Carole B
2002-01-01
The relative effectiveness of off-line and web-based promotional activities in increasing the use of health information web sites by target audiences was compared. Visitor sessions were classified according to their method of arrival at the site (referral) as external web site, search engine, or "no referrer" (i.e., a visitor arriving at the site by entering the URL or using a bookmark). The number of Australian visitor sessions correlated with no-referrer referrals but not with web site or search-engine referrals. Results showed that the targeted consumer group is more likely to access the web site as a result of off-line promotional activities. The properties of target audiences likely to influence the effectiveness of off-line versus on-line promotional strategies include the size of the Internet-using population within the target audience, its proficiency in the use of the Internet, and the increase in effectiveness of off-line promotional activities when applied to locally defined target audiences.
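The session classification used above boils down to a mapping from the HTTP referrer to one of three categories. A minimal illustrative sketch follows; the field handling and the list of search-engine hosts are assumptions, not taken from the study.

```python
from urllib.parse import urlparse

# Hypothetical list of search-engine hosts; not the study's list.
SEARCH_ENGINE_HOSTS = ("google.", "yahoo.", "altavista.", "lycos.", "bing.")

def classify_referral(referrer: str) -> str:
    """Classify a visitor session by how it arrived at the site."""
    if not referrer or referrer == "-":
        return "no_referrer"          # typed URL or bookmark
    host = urlparse(referrer).netloc.lower()
    if any(engine in host for engine in SEARCH_ENGINE_HOSTS):
        return "search_engine"
    return "external_web_site"

print(classify_referral("-"))
print(classify_referral("http://www.google.com/search?q=prostate+health"))
print(classify_referral("http://www.example.org/links.html"))
```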
Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool
ERIC Educational Resources Information Center
Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román
2014-01-01
This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…
Creating adaptive web recommendation system based on user behavior
NASA Astrophysics Data System (ADS)
Walek, Bogdan
2018-01-01
The paper proposes an adaptive web recommendation system based on user behavior. The proposed system uses an expert system to evaluate and recommend suitable items of content. Relevant items are subsequently evaluated and filtered based on the history of visited items and the user's preferred categories of items. The main parts of the proposed system are presented and described, and the recommendation system is verified on a specific example.
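As a rough illustration of the filtering step described above (history of visited items plus preferred categories applied to candidate items), the following sketch ranks candidates with an ad hoc scoring rule; the rule is an assumption for illustration, not the paper's expert-system logic.

```python
def recommend(items, visited_ids, preferred_categories, top_n=5):
    """Rank candidate items: unseen items in preferred categories first.

    `items` is a list of dicts with 'id', 'category' and a base 'score'
    (e.g. produced by an upstream expert system).
    """
    def adjusted_score(item):
        score = item["score"]
        if item["id"] in visited_ids:
            score *= 0.2                     # demote already-visited items
        if item["category"] in preferred_categories:
            score *= 1.5                     # promote preferred categories
        return score

    return sorted(items, key=adjusted_score, reverse=True)[:top_n]

catalog = [
    {"id": 1, "category": "sport",   "score": 0.7},
    {"id": 2, "category": "culture", "score": 0.9},
    {"id": 3, "category": "sport",   "score": 0.6},
]
print(recommend(catalog, visited_ids={2}, preferred_categories={"sport"}))
```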
Land use alters the resistance and resilience of soil food webs to drought
de Vries, Franciska T.; Liiri, Mira E.; Bjørnlund, Lisa; Bowker, Matthew A.; Christensen, Søren; Setälä, Heikki; Bardgett, Richard D.
2012-01-01
Soils deliver several ecosystem services including carbon sequestration and nutrient cycling, which are of central importance to climate mitigation and sustainable food production. Soil biota play an important role in carbon and nitrogen cycling, and, although the effects of land use on soil food webs are well documented, the consequences for their resistance and resilience to climate change are not known. We compared the resistance and resilience to drought, which is predicted to increase under climate change, of soil food webs of two common land-use systems: intensively managed wheat with a bacterial-based soil food web and extensively managed grassland with a fungal-based soil food web. We found that the fungal-based food web of grassland soil, and the processes of C and N loss it governs, was more resistant, although not resilient, and better able to adapt to drought than the bacterial-based food web of wheat soil. Structural equation modelling revealed that fungal-based soil food webs and greater microbial evenness mitigated C and N loss. Our findings show that land use strongly affects the resistance and resilience of soil food webs to climate change, and that extensively managed grassland promotes more resistant, and adaptable, fungal-based soil food webs.
Large orb-webs adapted to maximise total biomass not rare, large prey
Harmer, Aaron M. T.; Clausen, Philip D.; Wroe, Stephen; Madin, Joshua S.
2015-01-01
Spider orb-webs are the ultimate anti-ballistic devices, capable of dissipating the relatively massive kinetic energy of flying prey. Increased web size and prey stopping capacity have co-evolved in a number of orb-web taxa, but the selective forces driving web size and performance increases are under debate. The rare, large prey hypothesis maintains that the energetic benefits of rare, very large prey are so much greater than the gains from smaller, more common prey that smaller prey are irrelevant for reproduction. Here, we integrate biophysical and ecological data and models to test a major prediction of the rare, large prey hypothesis, that selection should favour webs with increased stopping capacity and that large prey should comprise a significant proportion of prey stopped by a web. We find that larger webs indeed have a greater capacity to stop large prey. However, based on prey ecology, we also find that these large prey make up a tiny fraction of the total biomass (=energy) potentially captured. We conclude that large webs are adapted to stop more total biomass, and that the capacity to stop rare, but very large, prey is an incidental consequence of the longer radial silks that scale with web size. PMID:26374379
ERIC Educational Resources Information Center
Lagoze, Carl; Neylon, Eamonn; Mooney, Stephen; Warnick, Walter L.; Scott, R. L.; Spence, Karen J.; Johnson, Lorrie A.; Allen, Valerie S.; Lederman, Abe
2001-01-01
Includes four articles that discuss Dublin Core metadata, digital rights management and electronic books, including interoperability; and directed query engines, a type of search engine designed to access resources on the deep Web that is being used at the Department of Energy. (LRW)
NASA Astrophysics Data System (ADS)
Madiraju, Praveen; Zhang, Yanqing
2002-03-01
When a user logs in to a website, behind the scenes the user leaves his or her impressions, usage patterns and access patterns in the web server's log file. A web usage mining agent can analyze these web logs to help web developers improve the organization and presentation of their websites, and to help system administrators improve system performance. Web logs also provide invaluable help in creating adaptive web sites and in analyzing network traffic. This paper presents the design and implementation of a web usage mining agent for digging into web log files.
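A minimal sketch of the kind of log digging such an agent performs: parse Common Log Format lines, then summarise page popularity and per-visitor access paths. The regular expression and the summary produced are illustrative assumptions, not the agent's actual implementation.

```python
import re
from collections import Counter, defaultdict

# Common Log Format: host ident user [time] "method path protocol" status bytes
LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) \S+" (\d{3}) \S+')

def mine_log(lines):
    """Return the ten most requested pages and each visitor's access path."""
    page_hits = Counter()
    visitor_paths = defaultdict(list)
    for line in lines:
        match = LOG_LINE.match(line)
        if not match:
            continue                       # skip malformed lines
        host, _time, _method, path, _status = match.groups()
        page_hits[path] += 1
        visitor_paths[host].append(path)
    return page_hits.most_common(10), visitor_paths

sample = ['10.0.0.1 - - [01/Mar/2002:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 1043']
print(mine_log(sample))
```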
"WGL," a Web Laboratory for Geometry
ERIC Educational Resources Information Center
Quaresma, Pedro; Santos, Vanda; Maric, Milena
2018-01-01
The role of information and communication technologies (ICT) in education is nowadays well recognised. The "Web Geometry Laboratory" is an e-learning, collaborative and adaptive Web environment for geometry, integrating a well-known dynamic geometry system. In a collaborative session, teachers and students, engaged in solving…
Developing Collections of Web-Published Materials
ERIC Educational Resources Information Center
Hsieh, Inga K.; Murray, Kathleen R.; Hartman, Cathy Nelson
2007-01-01
Librarians and archivists face challenges when adapting traditional collection development practices to meet the unique characteristics of Web-published materials. Likewise, preservation activities for Web-published materials must be undertaken at the outset of collection development lest they be lost forever. Standards and best practices for…
1973-01-01
Judith S. Miles of Lexington High School, Lexington, Massachusetts, proposed Skylab student experiment ED-52, Web Formation. This experiment was a study of a spider's behavior in a weightless environment. The geometrical structure of the web of the orb-weaving spider provides a good measure of the condition of its central nervous system. Since the spider senses its own weight to determine the required thickness of web material and uses both the wind and gravity to initiate construction of its web, the lack of gravitational force in Skylab provided a new and different stimulus to the spider's behavioral response. Two common cross spiders, Arabella and Anita, were used for the experiment aboard the Skylab 3 mission. After initial disoriented attempts, both spiders produced almost Earth-like webs once they had adapted to weightlessness. This photograph is of Arabella, a cross spider, in her initial attempt at spinning a web. The picture was taken by the crew of the Skylab 3 mission before Arabella adapted to her new environment.
Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.
2006-12-01
The goal for search engines is to return results that are both accurate and complete: a search engine should find only what you really want and find everything you really want. Search engines (even meta-search engines) lack semantics. Search is simply based on string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist is searching for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user to refine the search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator: it categorizes the search results from different online resources, such as education materials, publications, datasets and web search engines, that might be of interest to the user.
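The scoping step can be pictured as expanding the user's query term with its ontology neighbours before the query is sent to the underlying search engines. The toy ontology fragment and expansion rule below are assumptions for illustration, not Noesis internals.

```python
# Toy fragment of an atmospheric-science ontology: term -> related concepts.
ONTOLOGY = {
    "pressure": {
        "synonyms": ["atmospheric pressure", "barometric pressure"],
        "specializations": ["sea level pressure", "geopotential height"],
        "related": ["wind", "temperature"],
    }
}

def scoped_query(term, include_related=False):
    """Build a scoped search query string from a domain ontology entry."""
    entry = ONTOLOGY.get(term.lower())
    if entry is None:
        return term
    terms = [term] + entry["synonyms"] + entry["specializations"]
    if include_related:
        terms += entry["related"]
    # Quote multi-word terms and OR them together for a meta-search backend.
    return " OR ".join(f'"{t}"' if " " in t else t for t in terms)

print(scoped_query("pressure"))
```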
[Establishment of the database of the 3D facial models for the plastic surgery based on network].
Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun
2008-07-01
To collect the three-dimensional (3D) facial data of 30 facial deformity patients with a 3D scanner and establish a professional database based on the Internet, which can be helpful for clinical intervention. The primitive point data of face topography were collected by the 3D scanner. The 3D point cloud was then edited by reverse engineering software to reconstruct the 3D model of the face. The database system was divided into three parts: basic information, disease information and surgery information. The programming language of the web system is Java. The linkages between the tables of the database are reliable, and the query operation and data mining are convenient. Users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and a web system adapted to plastic surgery of the human face. It can be used both in the clinic and in basic research.
OnSight: Multi-platform Visualization of the Surface of Mars
NASA Astrophysics Data System (ADS)
Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.
2017-12-01
A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.
Recommendations for Benchmarking Web Site Usage among Academic Libraries.
ERIC Educational Resources Information Center
Hightower, Christy; Sih, Julie; Tilghman, Adam
1998-01-01
To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…
Cognitive and Task Influences on Web Searching Behavior.
ERIC Educational Resources Information Center
Kim, Kyung-Sun; Allen, Bryce
2002-01-01
Describes results from two independent investigations of college students that were conducted to study the impact of differences in users' cognition and search tasks on Web search activities and outcomes. Topics include cognitive style; problem-solving; and implications for the design and use of the Web and Web search engines. (Author/LRW)
Graph Structure in Three National Academic Webs: Power Laws with Anomalies.
ERIC Educational Resources Information Center
Thelwall, Mike; Wilkinson, David
2003-01-01
Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
A novel adaptive Cuckoo search for optimal query plan generation.
Gomathi, Ramalingam; Sharmila, Dhandapani
2014-01-01
The emergence of new web pages day by day has driven the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To enhance the efficiency, in terms of execution time, of querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in terms of query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
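For readers unfamiliar with the metaheuristic, the sketch below shows a generic cuckoo-search-style loop over candidate query plans (here, join orders scored by a toy cost model). It is only a schematic of the technique under assumed cost and mutation rules, not the paper's ACS algorithm.

```python
import random

def plan_cost(order, selectivity):
    """Toy cost model: cumulative intermediate-result size for a join order."""
    size, cost = 1.0, 0.0
    for predicate in order:
        size *= selectivity[predicate]
        cost += size
    return cost

def cuckoo_search_plans(predicates, selectivity, nests=8, iterations=200, pa=0.25):
    """Cuckoo-search-style optimisation over join orders (permutations)."""
    population = [random.sample(predicates, len(predicates)) for _ in range(nests)]
    best = min(population, key=lambda p: plan_cost(p, selectivity))[:]
    for _ in range(iterations):
        # "Levy flight" stand-in: mutate a random nest by a few random swaps.
        cuckoo = random.choice(population)[:]
        for _ in range(max(1, int(abs(random.gauss(0, 1)) * 2))):
            i, j = random.sample(range(len(cuckoo)), 2)
            cuckoo[i], cuckoo[j] = cuckoo[j], cuckoo[i]
        victim = random.randrange(nests)
        if plan_cost(cuckoo, selectivity) < plan_cost(population[victim], selectivity):
            population[victim] = cuckoo
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        population.sort(key=lambda p: plan_cost(p, selectivity))
        for k in range(int(pa * nests)):
            population[-(k + 1)] = random.sample(predicates, len(predicates))
        if plan_cost(population[0], selectivity) < plan_cost(best, selectivity):
            best = population[0][:]
    return best, plan_cost(best, selectivity)

selectivity = {"p1": 0.9, "p2": 0.05, "p3": 0.5, "p4": 0.2}
print(cuckoo_search_plans(list(selectivity), selectivity))
```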
Understanding the leaky engineering pipeline: Motivation and job adaptability of female engineers
NASA Astrophysics Data System (ADS)
Saraswathiamma, Manjusha Thekkedathu
This dissertation is a mixed-method study conducted using a qualitative grounded theory approach and quantitative survey and correlation approaches. The study aims to explore the motivation and adaptability of females in the engineering profession and to develop a theoretical framework for both motivation and adaptability issues. In doing so, it endeavors to design solutions for the low enrollment and attrition of female engineers in the engineering profession, often referred to as the "leaky female engineering pipeline." Profiles of 123 female engineers were studied for the qualitative approach, and 98 completed survey responses were analyzed for the quantitative approach. The qualitative, grounded-theory approach applied the constant comparison method; open, axial, and selective coding was used to classify the information into categories, sub-categories, and themes for both motivation and adaptability. The emergent themes for decisions motivating female enrollment include cognitive, emotional, and environmental factors. The themes identified for adaptability include seven job adaptability factors: job satisfaction, risk-taking attitude, career/skill development, family, gender stereotyping, interpersonal skills, and personal benefit, as well as a self-perceived job adaptability factor. Illeris' Three-dimensional Learning Theory was modified as a model for decisions motivating female enrollment. This study suggests a firsthand conceptual parallel with McClusky's Theory of Margin for the adaptability of female engineers in the profession. The study also attempted to design a survey instrument to measure the job adaptability of female engineers, and it identifies two factors that are significantly related to job adaptability: interpersonal skills (p < 0.01) and family (p < 0.05); gender stereotyping and personal benefit are also significantly (p < 0.1) related.
Adding question answering to an e-tutor for programming languages
NASA Astrophysics Data System (ADS)
Taylor, Kate; Moore, Simon
Control over a closed domain of textual material removes many question answering issues, as does an ontology that is closely intertwined with its sources. This pragmatic, shallow approach to many challenging areas of research in adaptive hypermedia, question answering, intelligent tutoring and human-computer interaction has been put into practice at Cambridge in the Computer Science undergraduate course to teach the hardware description language Verilog. This language itself poses many challenges as it crosses the interdisciplinary boundary between hardware and software engineers, giving rise to several human ontologies as well as the programming language itself. We present further results from our formal and informal surveys. We look at further work to increase the dialogue between student and tutor and export our knowledge to the Semantic Web.
Reinventing a health sciences digital library--organizational impact.
Moore, Margaret E; Garrison, Scott; Hayes, Barrie; McLendon, Wallace
2003-01-01
What is the organizational impact of becoming a digital library, as well as a physical entity with facilities and collections? Is the digital library an add-on or an integrated component of the overall library package? Librarians see sweeping environmental and technological changes. The staff members feel exhilarated and challenged by the pressures to adapt quickly and effectively. Librarians recognize that a Web presence, like other technology components, must be continuously enhanced and regularly re-engineered. The Health Sciences Library, University of North Carolina at Chapel Hill, is reinventing its digital presence to better meet the needs of the community. This paper provides a case study focusing on major changes in planning processes, organizational structure, staffing, budgeting, training, communications, and operations at the Health Sciences Library.
[Biomedical information on the internet using search engines. A one-year trial].
Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina
2004-01-01
The internet is a communication medium and content distributor that provides information in a general sense, but it can also be of great utility for the search and retrieval of biomedical information. Search engines are a great help in rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by key-words (pediatric or evidence) using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations output by each search. We evaluated the formal quality of Web sites and their domain extensions. Moreover, we compared the output of each search at the start of this study and after one year, and we considered the number of Web sites cited again as a criterion of reliability. We found some interesting results that are reported throughout the text. Our findings point out the extreme dynamicity of information on the Web and, for this reason, we advise great caution when using search and meta-search engines as tools for searching and retrieving reliable biomedical information. On the other hand, some search and meta-search engines could be very useful as a first step for better defining a search and, moreover, for finding institutional Web sites. This paper promotes a more conscious approach to the universe of biomedical information on the internet.
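The reliability criterion described above (the number of Web sites cited again after one year) reduces to an overlap measure between two top-20 result lists. A generic illustration follows; the URLs are placeholders, not the study's data.

```python
def citation_overlap(baseline_urls, followup_urls):
    """Fraction of baseline results still returned at follow-up."""
    baseline, followup = set(baseline_urls), set(followup_urls)
    if not baseline:
        return 0.0
    return len(baseline & followup) / len(baseline)

top20_baseline = ["http://a.example/", "http://b.example/", "http://c.example/"]
top20_one_year = ["http://b.example/", "http://d.example/", "http://c.example/"]
print(citation_overlap(top20_baseline, top20_one_year))  # 0.666...
```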
Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus F X
2007-08-30
Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from ftp://ftpmips.gsf.de/plants/apollo_webservice.
Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus FX
2007-01-01
Background Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. Results To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. Conclusion This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from . PMID:17760972
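Apollo2Go itself is a Java data adapter, but the general idea of a configurable client that pulls an annotation document as an XML string and hands it to a viewer's input routine can be sketched briefly. The endpoint and query parameter below are placeholders, not the real service interface.

```python
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def fetch_annotation_xml(base_url, region):
    """Retrieve an annotation document (e.g. GAME-XML) as a string.

    `base_url` and the `region` query parameter are hypothetical; a real
    client would read them from a configuration file.
    """
    url = f"{base_url}?{urlencode({'region': region})}"
    with urlopen(url, timeout=30) as response:
        xml_text = response.read().decode("utf-8")
    ET.fromstring(xml_text)   # fail early if the payload is not well-formed XML
    return xml_text

# Example (placeholder URL):
# xml_doc = fetch_annotation_xml("http://annotation.example.org/game", "chr1:1-50000")
```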
ERIC Educational Resources Information Center
Church, Jennifer; Felker, Kyle
2005-01-01
The dynamic world of the Web has provided libraries with a wealth of opportunities, including new approaches to the provision of information and varied internal staffing structures. The development of self-managed Web teams, endowed with authority and resources, can create an adaptable and responsive culture within libraries. This new working team…
van Stralen, Maartje M; Bolman, Catherine; Golsteijn, Rianne HJ; de Vries, Hein; Mudde, Aart N; Lechner, Lilian
2012-01-01
Background The Active Plus project is a systematically developed theory- and evidence-based, computer-tailored intervention, which was found to be effective in changing physical activity behavior in people aged over 50 years. The process and effect outcomes of the first version of the Active Plus project were translated into an adapted intervention using the RE-AIM framework. The RE-AIM model is often used to evaluate the potential public health impact of an intervention and distinguishes five dimensions: reach, effectiveness, adoption, implementation, and maintenance. Objective To gain insight into the systematic translation of the first print-delivered version of the Active Plus project into an adapted (Web-based) follow-up project. The focus of this study was on the reach and effectiveness dimensions, since these dimensions are most influenced by the results from the original Active Plus project. Methods We optimized the potential reach and effect of the interventions by extending the delivery mode of the print-delivered intervention into an additional Web-based intervention. The interventions were adapted based on results of the process evaluation, analyses of effects within subgroups, and evaluation of the working mechanisms of the original intervention. We pretested the new intervention materials and the Web-based versions of the interventions. Subsequently, the new intervention conditions were implemented in a clustered randomized controlled trial. Results Adaptations resulted in four improved tailoring interventions: (1) a basic print-delivered intervention, (2) a basic Web-based intervention, (3) a print-delivered intervention with an additional environmental component, and (4) a Web-based version with an additional environmental component. Pretest results with participants showed that all new intervention materials had modest usability and relatively high appreciation, and that filling in an online questionnaire and performing the online tasks was not problematic. We used the pretest results to improve the usability of the different interventions. Implementation of the new interventions in a clustered randomized controlled trial showed that the print-delivered interventions had a higher response rate than the Web-based interventions. Participants of both low and high socioeconomic status were reached by both print-delivered and Web-based interventions. Conclusions Translation of the (process) evaluation of an effective intervention into an adapted intervention is challenging and rarely reported. We discuss several major lessons learned from our experience. Trial Registration Nederlands Trial Register (NTR): 2297; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2297 (Archived by WebCite at http://www.webcitation.org/65TkwoESp). PMID:22390878
Peels, Denise A; van Stralen, Maartje M; Bolman, Catherine; Golsteijn, Rianne Hj; de Vries, Hein; Mudde, Aart N; Lechner, Lilian
2012-03-02
The Active Plus project is a systematically developed theory- and evidence-based, computer-tailored intervention, which was found to be effective in changing physical activity behavior in people aged over 50 years. The process and effect outcomes of the first version of the Active Plus project were translated into an adapted intervention using the RE-AIM framework. The RE-AIM model is often used to evaluate the potential public health impact of an intervention and distinguishes five dimensions: reach, effectiveness, adoption, implementation, and maintenance. To gain insight into the systematic translation of the first print-delivered version of the Active Plus project into an adapted (Web-based) follow-up project. The focus of this study was on the reach and effectiveness dimensions, since these dimensions are most influenced by the results from the original Active Plus project. We optimized the potential reach and effect of the interventions by extending the delivery mode of the print-delivered intervention into an additional Web-based intervention. The interventions were adapted based on results of the process evaluation, analyses of effects within subgroups, and evaluation of the working mechanisms of the original intervention. We pretested the new intervention materials and the Web-based versions of the interventions. Subsequently, the new intervention conditions were implemented in a clustered randomized controlled trial. Adaptations resulted in four improved tailoring interventions: (1) a basic print-delivered intervention, (2) a basic Web-based intervention, (3) a print-delivered intervention with an additional environmental component, and (4) a Web-based version with an additional environmental component. Pretest results with participants showed that all new intervention materials had modest usability and relatively high appreciation, and that filling in an online questionnaire and performing the online tasks was not problematic. We used the pretest results to improve the usability of the different interventions. Implementation of the new interventions in a clustered randomized controlled trial showed that the print-delivered interventions had a higher response rate than the Web-based interventions. Participants of both low and high socioeconomic status were reached by both print-delivered and Web-based interventions. Translation of the (process) evaluation of an effective intervention into an adapted intervention is challenging and rarely reported. We discuss several major lessons learned from our experience. Nederlands Trial Register (NTR): 2297; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2297 (Archived by WebCite at http://www.webcitation.org/65TkwoESp).
Introduction to Chemical Engineering Reactor Analysis: A Web-Based Reactor Design Game
ERIC Educational Resources Information Center
Orbey, Nese; Clay, Molly; Russell, T.W. Fraser
2014-01-01
An approach to explain chemical engineering through a Web-based interactive game design was developed and used with college freshman and junior/senior high school students. The goal of this approach was to demonstrate how to model a lab-scale experiment, and use the results to design and operate a chemical reactor. The game incorporates both…
What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?
ERIC Educational Resources Information Center
Seyedarabi, Faezeh
2014-01-01
This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…
How To Do Field Searching in Web Search Engines: A Field Trip.
ERIC Educational Resources Information Center
Hock, Ran
1998-01-01
Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…
Using the Internet in Career Education. Practice Application Brief No. 1.
ERIC Educational Resources Information Center
Wagner, Judith O.
The World Wide Web has a wealth of information on career planning, individual jobs, and job search methods that counselors and teachers can use. Search engines such as Yahoo! and Magellan are organized like library tools, while engines such as AltaVista and HotBot search for words or phrases. Web indexes offer a variety of features. The criteria for…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
..." field when using either the Web-based search (advanced search) engine or the ADAMS FIND tool in Citrix... should enter "05200011" in the "Docket Number" field in the web-based search (advanced search) engine... ML100740441. To search for documents in ADAMS using Vogtle Units 3 and 4 COL application docket numbers, 52...
Global polar geospatial information service retrieval based on search engine and ontology reasoning
Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang
2007-01-01
In order to improve the access precision of polar geospatial information service on web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed, the geospatial service search is implemented to find the coarse service from web, the ontology reasoning is designed to find the refined service from the coarse service. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Some key technologies addressed include service discovery based on search engine and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctica multi protocol OWS portal prototype based on the proposed methodology is introduced.
Intelligent Web-Based Learning System with Personalized Learning Path Guidance
ERIC Educational Resources Information Center
Chen, C. M.
2008-01-01
Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths…
Using Web-Based Practice to Enhance Mathematics Learning and Achievement
ERIC Educational Resources Information Center
Nguyen, Diem M.; Kulm, Gerald
2005-01-01
This article describes 1) the special features and accessibility of an innovative web-based practice instrument (WebMA) designed with randomized short-answer, matching and multiple choice items incorporated with automatically adapted feedback for middle school students; and 2) an exploratory study that compares the effects and contributions of…
Webquests for English-Language Learners: Essential Elements for Design
ERIC Educational Resources Information Center
Sox, Amanda; Rubinstein-Avila, Eliane
2009-01-01
The authors of this article advocate for the adaptation and use of WebQuests (web-based interdisciplinary collaborative learning units) to integrate technological competencies and content area knowledge development at the secondary level and to support the linguistic needs of English-language learners (ELLs). After examining eight WebQuests, the…
Scaffolding Preservice Teachers' WebQuest Design: A Qualitative Study
ERIC Educational Resources Information Center
Wang, Feng; Hannafin, Michael J.
2009-01-01
This study examined how participating preservice teachers reported their perceptions and use of different scaffolds provided to support their WebQuest design. Sixteen preservice teachers participated in a succession of course activities designed to guide WebQuest design and development. Results indicated that while participants followed, adapted,…
Web Geometry Laboratory: Case Studies in Portugal and Serbia
ERIC Educational Resources Information Center
Santos, Vanda; Quaresma, Pedro; Maric, Milena; Campos, Helena
2018-01-01
The role of information and communication technologies (ICT) in education is well recognised--learning environments where the ICT features included are being proposed for many years now. The Web Geometry Laboratory (WGL) innovates in proposing a blended learning, collaborative and adaptive learning Web-environment for geometry. It integrates a…
Computational Intelligence in Web-Based Education: A Tutorial
ERIC Educational Resources Information Center
Vasilakos, Thanos; Devedzic, Vladan; Kinshuk; Pedrycz, Witold
2004-01-01
This article discusses some important aspects of Web Intelligence (WI) in the context of educational applications. Some of the key components of WI have already attracted developers of web-based educational systems for quite some time- ontologies, adaptivity and personalization, and agents. The paper focuses on the application of Computational…
The charged particle accelerators subsystems modeling
NASA Astrophysics Data System (ADS)
Averyanov, G. P.; Kobylyatskiy, A. V.
2017-01-01
This paper presents a web-based resource for the information support of engineering, science and education in electrophysics, containing web-based tools for the simulation of charged particle accelerator subsystems. The motivation for developing a web environment for virtual electrophysical laboratories is formulated. Trends in the design of dynamic web environments for supporting scientific research and e-learning are analyzed within the framework of the Open Education concept.
ERIC Educational Resources Information Center
Fast, Karl V.; Campbell, D. Grant
2001-01-01
Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…
Authoring of Adaptive Computer Assisted Assessment of Free-Text Answers
ERIC Educational Resources Information Center
Alfonseca, Enrique; Carro, Rosa M.; Freire, Manuel; Ortigosa, Alvaro; Perez, Diana; Rodriguez, Pilar
2005-01-01
Adaptation techniques can be applied not only to the multimedia contents or navigational possibilities of a course, but also to the assessment. In order to facilitate the authoring of adaptive free-text assessment and its integration within adaptive web-based courses, Adaptive Hypermedia techniques and Free-text Computer Assisted Assessment are…
Start Your Engines: Surfing with Search Engines for Kids.
ERIC Educational Resources Information Center
Byerly, Greg; Brodie, Carolyn S.
1999-01-01
Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)
Waack, Katherine E; Ernst, Michael E; Graber, Mark A
2004-12-01
In the last 5 years, several treatments have become available for erectile dysfunction (ED). During this same period, consumer use of the Internet for health information has increased rapidly. In traditional direct-to-consumer advertisements, viewers are often referred to a pharmaceutical company Web site for further information. To evaluate the accessibility and informational content of 5 pharmaceutical company Web sites about ED treatments. Using 10 popular search engines and 1 specialized search engine, the accessibility of the official pharmaceutical company-sponsored Web site was determined by searching under brand and generic names. One company also manufactures an ED device; this site was also included. A structured, explicit review of information found on these sites was conducted. Of 110 searches (1 for each treatment, including corresponding generic drug name, using each search engine), 68 yielded the official pharmaceutical company Web site within the first 10 links. Removal of outliers (for both brand and generic name searches) resulted in 68 of 77 searches producing the pharmaceutical company Web site for the brand-name drug in the top 10 links. Although all pharmaceutical company Web sites contained general information on adverse effects and contraindications to use, only 2 sites gave actual percentages. Three sites provided references for their materials or discussed other treatment or drug options, while 4 of the sites contained profound advertising or emotive content. None mentioned cost of the therapy. The information contained on pharmaceutical company Web sites for ED treatments is superficial and aimed primarily at consumers. It is largely promotional and provides only limited information needed to effectively compare treatment options.
Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet
NASA Astrophysics Data System (ADS)
Hess, A.
Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.
Providing web-based mental health services to at-risk women
2011-01-01
Background We examined the feasibility of providing web-based mental health services, including synchronous internet video conferencing of an evidence-based support/education group, to at-risk women, specifically poor lone mothers. The objectives of this study were to: (i) adapt a face-to-face support/education group intervention to a web-based format for lone mothers, and (ii) evaluate lone mothers' response to web-based services, including an online video conferencing group intervention program. Methods Participating mothers were recruited through advertisements. To adapt the face-to-face intervention to a web-based format, we evaluated participant motivation through focus group/key informant interviews (n = 7), adapted the intervention training manual for a web-based environment and provided a computer training manual. To evaluate response to web-based services, we provided the intervention to two groups of lone mothers (n = 15). Pre-post quantitative evaluation of mood, self-esteem, social support and parenting was done. Post intervention follow up interviews explored responses to the group and to using technology to access a health service. Participants received $20 per occasion of data collection. Interviews were taped, transcribed and content analysis was used to code and interpret the data. Adherence to the intervention protocol was evaluated. Results Mothers participating in this project experienced multiple difficulties, including financial and mood problems. We adapted the intervention training manual for use in a web-based group environment and ensured adherence to the intervention protocol based on viewing videoconferencing group sessions and discussion with the leaders. Participant responses to the group intervention included decreased isolation, and increased knowledge and confidence in themselves and their parenting; the responses closely matched those of mothers who obtained same service in face-to-face groups. Pre-and post-group quantitative evaluations did not show significant improvements on measures, although the study was not powered to detect these. Conclusions We demonstrated that an evidence-based group intervention program for lone mothers developed and evaluated in face-to-face context transferred well to an online video conferencing format both in terms of group process and outcomes. PMID:21854563
Providing web-based mental health services to at-risk women.
Lipman, Ellen L; Kenny, Meghan; Marziali, Elsa
2011-08-19
We examined the feasibility of providing web-based mental health services, including synchronous internet video conferencing of an evidence-based support/education group, to at-risk women, specifically poor lone mothers. The objectives of this study were to: (i) adapt a face-to-face support/education group intervention to a web-based format for lone mothers, and (ii) evaluate lone mothers' response to web-based services, including an online video conferencing group intervention program. Participating mothers were recruited through advertisements. To adapt the face-to-face intervention to a web-based format, we evaluated participant motivation through focus group/key informant interviews (n = 7), adapted the intervention training manual for a web-based environment and provided a computer training manual. To evaluate response to web-based services, we provided the intervention to two groups of lone mothers (n = 15). Pre-post quantitative evaluation of mood, self-esteem, social support and parenting was done. Post intervention follow up interviews explored responses to the group and to using technology to access a health service. Participants received $20 per occasion of data collection. Interviews were taped, transcribed and content analysis was used to code and interpret the data. Adherence to the intervention protocol was evaluated. Mothers participating in this project experienced multiple difficulties, including financial and mood problems. We adapted the intervention training manual for use in a web-based group environment and ensured adherence to the intervention protocol based on viewing videoconferencing group sessions and discussion with the leaders. Participant responses to the group intervention included decreased isolation, and increased knowledge and confidence in themselves and their parenting; the responses closely matched those of mothers who obtained same service in face-to-face groups. Pre-and post-group quantitative evaluations did not show significant improvements on measures, although the study was not powered to detect these. We demonstrated that an evidence-based group intervention program for lone mothers developed and evaluated in face-to-face context transferred well to an online video conferencing format both in terms of group process and outcomes.
Artemis: Integrating Scientific Data on the Grid (Preprint)
2004-07-01
Theseus execution engine [Barish and Knoblock 03] to efficiently execute the generated datalog program. The Theseus execution engine has a wide variety of operations to query databases, web sources, and web services. Theseus also contains a wide variety of relational operations, such as selection, union, or projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel and
Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela
2011-06-01
The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.
Stakeholder Analysis for the CF Counter-IED Training Courses
2010-05-01
for more than purely research purposes when the experimenter is present. 3.1.3 Learning Style-based Adaptation The Index of Learning Styles (Felder...student. It is recommended that the Adaptation Module uses the same ontology-based reasoning approach as the Evaluation Module. RacerPro is the recommended reasoner. RacerPro is used as a system for managing semantic web ontologies based on the Web Ontology Language (OWL). The design phase will confirm
Finding Specification Pages from the Web
NASA Astrophysics Data System (ADS)
Yoshinaga, Naoki; Torisawa, Kentaro
This paper presents a method of finding a specification page on the Web for a given object (e.g., "Ch. d'Yquem") and its class label (e.g., "wine"). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., "county"-"Sauternes") in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., "county" for a class "wine"). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we called site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval.
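The author-aware site-frequency score can be read, in simplified form, as counting how many distinct sites present a string as an attribute rather than how many pages do, which suppresses words that only a single author repeats across many pages. The sketch below reflects that simplified reading and is not the paper's implementation.

```python
from collections import defaultdict
from urllib.parse import urlparse

def site_frequency(candidate_attributes_by_page):
    """Count, per candidate attribute, the number of distinct sites using it.

    `candidate_attributes_by_page` maps a page URL to the set of attribute
    strings extracted from that page's layout (tables, decorated labels).
    """
    sites_per_attribute = defaultdict(set)
    for page_url, attributes in candidate_attributes_by_page.items():
        site = urlparse(page_url).netloc
        for attribute in attributes:
            sites_per_attribute[attribute].add(site)
    return {attr: len(sites) for attr, sites in sites_per_attribute.items()}

pages = {
    "http://wines.example.com/yquem": {"county", "vintage", "our promise"},
    "http://cellar.example.org/yquem": {"county", "vintage"},
}
print(site_frequency(pages))  # {'county': 2, 'vintage': 2, 'our promise': 1}
```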
NASA Astrophysics Data System (ADS)
Qin, Rufu; Lin, Liangzhao
2017-06-01
Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system with different types of sensors and numerical models that incorporates Geographic Information System (GIS) and web techniques, and it focuses on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment status, and the numerical models provide the meteorological and physical oceanographic parameter estimates. A model output processing software package was developed in C# using ArcGIS Engine functions, which provides the capability to automatically generate visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi real-time data access, interactive visualization and analysis, and provision of early warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to emergency coastal seiche events and allows easy adaptation to other regional and scientific domains related to real-time monitoring and forecasting.
Zebra: a web server for bioinformatic analysis of diverse protein families.
Suplatov, Dmitry; Kirilin, Evgeny; Takhaveev, Vakil; Svedas, Vytas
2014-01-01
During evolution of proteins from a common ancestor, one functional property can be preserved while others can vary leading to functional diversity. A systematic study of the corresponding adaptive mutations provides a key to one of the most challenging problems of modern structural biology - understanding the impact of amino acid substitutions on protein function. The subfamily-specific positions (SSPs) are conserved within functional subfamilies but are different between them and, therefore, seem to be responsible for functional diversity in protein superfamilies. Consequently, a corresponding method to perform the bioinformatic analysis of sequence and structural data has to be implemented in the common laboratory practice to study the structure-function relationship in proteins and develop novel protein engineering strategies. This paper describes Zebra web server - a powerful remote platform that implements a novel bioinformatic analysis algorithm to study diverse protein families. It is the first application that provides specificity determinants at different levels of functional classification, therefore addressing complex functional diversity of large superfamilies. Statistical analysis is implemented to automatically select a set of highly significant SSPs to be used as hotspots for directed evolution or rational design experiments and analyzed studying the structure-function relationship. Zebra results are provided in two ways - (1) as a single all-in-one parsable text file and (2) as PyMol sessions with structural representation of SSPs. Zebra web server is available at http://biokinet.belozersky.msu.ru/zebra .
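A bare-bones reading of subfamily-specific positions: alignment columns that are conserved within each functional subfamily but differ between subfamilies. The sketch below uses a simple conservation threshold as an assumption for illustration; Zebra's statistical selection of significant SSPs is more involved.

```python
from collections import Counter

def subfamily_specific_positions(subfamilies, min_conservation=0.9):
    """Columns conserved within each subfamily but different between them.

    `subfamilies` maps a subfamily name to a list of equally long,
    aligned sequences (strings); gaps are '-'.
    """
    length = len(next(iter(subfamilies.values()))[0])
    ssps = []
    for column in range(length):
        consensus = {}
        for name, sequences in subfamilies.items():
            counts = Counter(seq[column] for seq in sequences)
            residue, hits = counts.most_common(1)[0]
            if residue == "-" or hits / len(sequences) < min_conservation:
                break                      # not conserved within this subfamily
            consensus[name] = residue
        else:
            if len(set(consensus.values())) > 1:   # differs between subfamilies
                ssps.append((column, consensus))
    return ssps

families = {
    "A": ["MKLSA", "MKLSA", "MKLTA"],
    "B": ["MRLSA", "MRLSA", "MRLSA"],
}
print(subfamily_specific_positions(families, min_conservation=0.6))
```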
MR-Tandem: parallel X!Tandem using Hadoop MapReduce on Amazon Web Services.
Pratt, Brian; Howbert, J Jeffry; Tasman, Natalie I; Nilsson, Erik J
2012-01-01
MR-Tandem adapts the popular X!Tandem peptide search engine to work with Hadoop MapReduce for reliable parallel execution of large searches. MR-Tandem runs on any Hadoop cluster but offers special support for Amazon Web Services for creating inexpensive on-demand Hadoop clusters, enabling search volumes that might not otherwise be feasible with the compute resources a researcher has at hand. MR-Tandem is designed to drop in wherever X!Tandem is already in use and requires no modification to existing X!Tandem parameter files, and only minimal modification to X!Tandem-based workflows. MR-Tandem is implemented as a lightly modified X!Tandem C++ executable and a Python script that drives Hadoop clusters including Amazon Web Services (AWS) Elastic Map Reduce (EMR), using the modified X!Tandem program as a Hadoop Streaming mapper and reducer. The modified X!Tandem C++ source code is Artistic licensed, supports pluggable scoring, and is available as part of the Sashimi project at http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/extern/xtandem/. The MR-Tandem Python script is Apache licensed and available as part of the Insilicos Cloud Army project at http://ica.svn.sourceforge.net/viewvc/ica/trunk/mr-tandem/. Full documentation and a windows installer that configures MR-Tandem, Python and all necessary packages are available at this same URL. brian.pratt@insilicos.com
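The general pattern of driving a Hadoop Streaming job from a Python script, which MR-Tandem follows, can be sketched as below. This is not the actual MR-Tandem script; the streaming-jar location, HDFS paths and the mapper/reducer flags are assumptions, and running it requires a configured Hadoop client.

# A minimal sketch of the general pattern MR-Tandem follows: driving a Hadoop
# Streaming job from Python with an executable used as mapper and reducer.
# All paths below (streaming jar, input/output directories, executable name
# and flags) are assumptions for illustration; this is not the actual
# MR-Tandem script.
import subprocess

def run_streaming_job(streaming_jar, input_dir, output_dir, mapper_cmd, reducer_cmd):
    cmd = [
        "hadoop", "jar", streaming_jar,
        "-input", input_dir,
        "-output", output_dir,
        "-mapper", mapper_cmd,
        "-reducer", reducer_cmd,
    ]
    subprocess.run(cmd, check=True)  # raises if the job submission fails

if __name__ == "__main__":
    run_streaming_job(
        streaming_jar="/usr/lib/hadoop/hadoop-streaming.jar",  # assumed location
        input_dir="hdfs:///spectra/input",
        output_dir="hdfs:///spectra/output",
        mapper_cmd="./tandem.exe -as-mapper",    # hypothetical flags
        reducer_cmd="./tandem.exe -as-reducer",  # hypothetical flags
    )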
Chikh, Soufien; Watelain, Eric; Faupin, Arnaud; Pinti, Antonio; Jarraya, Mohamed; Garnier, Cyril
2016-08-01
Voluntary movement often causes postural perturbation that requires an anticipatory postural adjustment to minimize the perturbation and increase efficiency and coordination during execution. This systematic review focuses specifically on the relationship between the parameters of anticipatory muscular activities and movement finality in the sitting position among adults, in order to study the adaptability and predictability of anticipatory muscular activity parameters across different movements and conditions in the sitting position. A systematic literature search was performed using PubMed, Science Direct, Web of Science, Springer-Link, Engineering Village, and EbscoHost. Inclusion and exclusion criteria were applied to retain the most rigorous and specific studies, yielding 76 articles. Seventeen articles were excluded at first reading, and after the application of inclusion and exclusion criteria, 23 were retained. In the sitting position, central nervous system activity precedes movement through diverse anticipatory muscular activities and shows the ability to adapt anticipatory muscular activity parameters to the movement direction, postural stability, or load weight. In addition, these parameters could be adapted to the speed of execution, as found for the standing position. Parameters of anticipatory muscular activities (duration, order, and amplitude of the muscle contractions constituting the anticipatory muscular activity) could be used as a predictive indicator of a forthcoming movement. In addition, this systematic review may improve methodology in empirical studies and assistive technology for people with disabilities. © The Author(s) 2016.
Electronic Biomedical Literature Search for Budding Researcher
Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.
2013-01-01
Searching for specific and well-defined literature related to a subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for the study objectives and hypothesis. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of medical literature include Google, Google Scholar, Scirus, the Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine/database while searching for a particular type of data. Knowledge about the types of literature, levels of evidence, and the features of a search engine (availability, user interface, ease of access, reputable content, and the period of time covered) allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search, and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937
Getting To Know the "Invisible Web."
ERIC Educational Resources Information Center
Smith, C. Brian
2001-01-01
Discusses the portions of the World Wide Web that cannot be accessed via directories or search engines, explains why they can't be accessed, and offers suggestions for reference librarians to find these sites. Lists helpful resources and gives examples of invisible Web sites which are often databases. (LRW)
Quality of Web-Based Information on Cannabis Addiction
ERIC Educational Resources Information Center
Khazaal, Yasser; Chatton, Anne; Cochand, Sophie; Zullino, Daniele
2008-01-01
This study evaluated the quality of Web-based information on cannabis use and addiction and investigated particular content quality indicators. Three keywords ("cannabis addiction," "cannabis dependence," and "cannabis abuse") were entered into two popular World Wide Web search engines. Websites were assessed with a standardized proforma designed…
ERIC Educational Resources Information Center
Huang, Yueh-Min; Liu, Chien-Hung
2009-01-01
One of the key challenges in the promotion of web-based learning is the development of effective collaborative learning environments. We posit that the structuration process strongly influences the effectiveness of technology used in web-based collaborative learning activities. In this paper, we propose an ant swarm collaborative learning (ASCL)…
Perspectives for Electronic Books in the World Wide Web Age.
ERIC Educational Resources Information Center
Bry, Francois; Kraus, Michael
2002-01-01
Discusses the rapid growth of the World Wide Web and the lack of use of electronic books and suggests that specialized contents and device independence can make Web-based books compete with print. Topics include enhancing the hypertext model of XML; client-side adaptation, including browsers and navigation; and semantic modeling. (Author/LRW)
Application of Mobile Agents in Web-Based Learning Environment.
ERIC Educational Resources Information Center
Hong Hong, Kinshuk; He, Xiaoqin; Patel, Ashok; Jesshope, Chris
Web-based learning environments are strongly driven by the information revolution and the Internet, but they have a number of common deficiencies, such as slow access, no adaptivity to the individual student, limitation by bandwidth, and more. This paper outlines the benefits of mobile agents technology, and describes its application in Web-based…
Hall, Maclin S.; Jackson, Theodore G.; Knerr, Christopher
1998-02-17
An improved system for measuring the velocity of ultrasonic signals within the plane of moving web-like materials, such as paper, paperboard and the like. In addition to velocity measurements of ultrasonic signals in the plane of the web in the MD and CD, one embodiment of the system in accordance with the present invention is also adapted to provide on-line indication of the polar specific stiffness of the moving web. In another embodiment of the invention, the velocity of ultrasonic signals in the plane of the web is measured by way of a plurality of ultrasonic transducers carried by synchronously driven wheels or cylinders, thus eliminating undue transducer wear due to any speed differences between the transducers and the web. In order to provide a relatively constant contact force between the transducers and the webs, the transducers are mounted in sensor housings which include a spring for biasing the transducer radially outwardly. The sensor housings are adapted to be easily and conveniently mounted to the carrier to provide a relatively constant contact force between the transducers and the moving web.
A unified architecture for biomedical search engines based on semantic web technologies.
Jalali, Vahid; Matash Borujerdi, Mohammad Reza
2011-04-01
There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines, and their interoperability, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and the results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
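For reference, the two evaluation measures reported above can be computed as in the following self-contained sketch; the toy ranked lists and relevance judgements are illustrative only.

# A small, self-contained sketch of the two evaluation measures the abstract
# mentions (precision/recall-based average precision and mean average
# precision), applied to ranked result lists against relevance judgements.
# The toy data are illustrative only.

def average_precision(ranked_ids, relevant_ids):
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            score += hits / rank
    return score / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_ids, relevant_ids) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

if __name__ == "__main__":
    runs = [
        (["d1", "d3", "d2"], {"d1", "d2"}),
        (["d5", "d4"], {"d4"}),
    ]
    print(round(mean_average_precision(runs), 3))  # 0.667 for the toy data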
78 FR 8108 - NextGen Solutions Vendors Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-05
... Commerce is developing a web-based NextGen Solutions Vendors Guide intended to be used by foreign air... being listed on the Vendors Guide Web site should submit their company's name, Web site address, contact... to aviation system upgrades) Example: Engineering Services More information on the four ICAO ASBU...
Hot Topics on the Web: Strategies for Research.
ERIC Educational Resources Information Center
Diaz, Karen R.; O'Hanlon, Nancy
2001-01-01
Presents strategies for researching topics on the Web that are controversial or current in nature. Discusses topic selection and overviews, including the use of online encyclopedias; search engines; finding laws and pending legislation; advocacy groups; proprietary databases; Web site evaluation; and the continuing usefulness of print materials.…
Lessons Learned from a Collaborative Sensor Web Prototype
NASA Technical Reports Server (NTRS)
Ames, Troy; Case, Lynne; Krahe, Chris; Hess, Melissa; Hennessy, Joseph F. (Technical Monitor)
2002-01-01
This paper describes the Sensor Web Application Prototype (SWAP) system that was developed for the Earth Science Technology Office (ESTO). The SWAP is aimed at providing an initial engineering proof-of-concept prototype highlighting sensor collaboration, dynamic cause-effect relationship between sensors, dynamic reconfiguration, and remote monitoring of sensor webs.
The Implementation of Web Conferencing Technologies in Online Graduate Classes
ERIC Educational Resources Information Center
Zotti, Robert
2017-01-01
This dissertation examines the implementation of web conferencing technology in online graduate courses within management, engineering, and computer science programs. Though the spread of learning management systems over the past two decades has been dramatic, the use of web conferencing technologies has curiously lagged. The real-time…
Social Networking on the Semantic Web
ERIC Educational Resources Information Center
Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam
2005-01-01
Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…
Analyzing Web Server Logs to Improve a Site's Usage. The Systems Librarian
ERIC Educational Resources Information Center
Breeding, Marshall
2005-01-01
This column describes ways to streamline and optimize how a Web site works in order to improve both its usability and its visibility. The author explains how to analyze logs and other system data to measure the effectiveness of the Web site design and search engine.
Understanding the Leaky Engineering Pipeline: Motivation and Job Adaptability of Female Engineers
ERIC Educational Resources Information Center
Saraswathiamma, Manjusha Thekkedathu
2010-01-01
This dissertation is a mixed-method study conducted using qualitative grounded theory and quantitative survey and correlation approaches. This study aims to explore the motivation and adaptability of females in the engineering profession and to develop a theoretical framework for both motivation and adaptability issues. As a result, this study…
Can people find patient decision aids on the Internet?
Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette
2008-12-01
To determine whether people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines, we chose five medical conditions for which English-language PtDAs were available from at least three different developers. The search engines used were Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first-page results linked to informational Web pages about the condition; only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). Searches using the Yahoo and Google search engines were more likely to find PtDAs. The following combination of search terms: condition, treatment, decision (e.g. breast cancer surgery decision) was the most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved with the use of standardized labelling, providing patients with specific Web site addresses, or access to an independent PtDA clearinghouse.
What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.
Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W
2015-06-01
Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
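The reranking idea can be sketched as follows; the scoring functions and the mixing weight are placeholders standing in for the text-based engine and the image-based models, not the paper's actual components.

# A minimal sketch of the reranking idea described above: retrieve a candidate
# set with a text-only score, then rerank by a weighted combination of the text
# score and a visual score computed from the page's images. The scoring
# functions and the weight are placeholders, not the paper's actual models.

def rerank(candidates, text_score, visual_score, alpha=0.7):
    """candidates: iterable of page ids; alpha weights the text component."""
    combined = {
        page: alpha * text_score(page) + (1.0 - alpha) * visual_score(page)
        for page in candidates
    }
    return sorted(candidates, key=lambda p: combined[p], reverse=True)

if __name__ == "__main__":
    # Toy scores standing in for text relevance and an image-based classifier.
    text = {"pageA": 0.9, "pageB": 0.8, "pageC": 0.4}
    visual = {"pageA": 0.1, "pageB": 0.9, "pageC": 0.2}
    print(rerank(text.keys(), text.get, visual.get))  # pageB overtakes pageA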
Adaptive Gas Turbine Engine Control for Deterioration Compensation Due to Aging
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Parker, Khary I.; Chatterjee, Santanu
2003-01-01
This paper presents an ad hoc adaptive, multivariable controller tuning rule that compensates for a thrust response variation in an engine whose performance has been degraded through use and wear. The upset appears when a large throttle transient is performed such that the engine controller switches from low-speed to high-speed mode. A relationship was observed between the level of engine degradation and the overshoot in engine temperature ratio, which was determined to cause the thrust response variation. This relationship was used to adapt the controller. The method is shown to work very well up to the operability limits of the engine. Additionally, since the level of degradation can be estimated from sensor data, it would be feasible to implement the adaptive control algorithm on-line.
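As a toy illustration of the general idea only (not the paper's tuning rule), the sketch below maps an observed engine-temperature-ratio overshoot to a degradation estimate and scales a controller gain accordingly; the linear mapping and all constants are assumptions.

# A toy illustration (not the paper's actual tuning rule) of the general idea:
# estimate a degradation level from sensed data, then scale a controller gain
# to compensate so the thrust response stays consistent. The linear mapping
# and all constants are assumptions for illustration.

def estimate_degradation(temp_ratio_overshoot, max_overshoot=0.05):
    """Map an observed engine-temperature-ratio overshoot to [0, 1]."""
    return min(max(temp_ratio_overshoot / max_overshoot, 0.0), 1.0)

def adapted_gain(nominal_gain, degradation_level, sensitivity=0.5):
    """degradation_level in [0, 1]; higher degradation -> larger gain."""
    return nominal_gain * (1.0 + sensitivity * degradation_level)

if __name__ == "__main__":
    overshoot = 0.03  # hypothetical measured overshoot
    level = estimate_degradation(overshoot)
    print(adapted_gain(nominal_gain=2.0, degradation_level=level))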
Web-Based Learning Programs: Use by Learners with Various Cognitive Styles
ERIC Educational Resources Information Center
Chen, Ling-Hsiu
2010-01-01
To consider how Web-based learning program is utilized by learners with different cognitive styles, this study presents a Web-based learning system (WBLS) and analyzes learners' browsing data recorded in the log file to identify how learners' cognitive styles and learning behavior are related. In order to develop an adapted WBLS, this study also…
The Application of an Adaptive, Web-Based Learning Environment on Oxidation-Reduction Reactions
ERIC Educational Resources Information Center
Own, Zangyuan
2006-01-01
The World Wide Web is increasingly being used as a vehicle for flexible learning, where learning is seen to be free from time, geographical, and participation constraints. In addition to flexibility, the Web facilitates student-centered approaches, creating a motivating and active learning environment. The purpose of this study is to set up an…
Enabling Incremental Query Re-Optimization.
Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau
2016-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
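The incremental flavour of the problem can be illustrated with a small sketch: a memoized join-ordering dynamic program whose cached subplans are selectively invalidated when a new cost estimate arrives. The cost model and left-deep enumeration below are deliberately simplistic and are not the paper's datalog-based formulation.

# A minimal sketch of incremental plan re-optimization in the spirit described
# above: a memoized dynamic program over join subsets whose cached entries are
# selectively invalidated and recomputed when new cost (cardinality) estimates
# arrive. The cost model and left-deep enumeration are deliberately simplistic
# and are not the paper's datalog-based formulation.

class IncrementalOptimizer:
    def __init__(self, cardinalities):
        self.card = dict(cardinalities)   # relation -> estimated cardinality
        self.best = {}                    # frozenset of relations -> (cost, plan)

    def _outsize(self, subset):
        size = 1.0
        for r in subset:
            size *= self.card[r]
        return size

    def _plan(self, subset):
        if subset in self.best:
            return self.best[subset]
        if len(subset) == 1:
            result = (0.0, next(iter(subset)))
        else:
            result = min(
                (self._plan(subset - {r})[0] + self._outsize(subset - {r}) * self.card[r],
                 (self._plan(subset - {r})[1], r))
                for r in subset
            )
        self.best[subset] = result
        return result

    def optimize(self, relations):
        return self._plan(frozenset(relations))

    def update_cardinality(self, relation, new_card):
        # Invalidate only cached subplans that involve the changed relation.
        self.card[relation] = new_card
        self.best = {s: v for s, v in self.best.items() if relation not in s}

if __name__ == "__main__":
    opt = IncrementalOptimizer({"R": 1000, "S": 10, "T": 100})
    print(opt.optimize({"R", "S", "T"}))
    opt.update_cardinality("S", 5000)       # new runtime estimate arrives
    print(opt.optimize({"R", "S", "T"}))    # only affected subplans recomputed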
'Sciencenet'--towards a global search and share engine for all scientific knowledge.
Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban
2011-06-15
Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.
NASA Astrophysics Data System (ADS)
Habib, E. H.; Tarboton, D. G.; Lall, U.; Bodin, M.; Rahill-Marier, B.; Chimmula, S.; Meselhe, E. A.; Ali, A.; Williams, D.; Ma, Y.
2013-12-01
The hydrologic community has long recognized the need for broad reform in hydrologic education. A paradigm shift is critically sought in undergraduate hydrology and water resource education by adopting context-rich, student-centered, and active learning strategies. Hydrologists currently deal with intricate issues rooted in complex natural ecosystems containing a multitude of interconnected processes. Advances in this multi-disciplinary field include observational settings such as Critical Zone and Water, Sustainability and Climate Observatories, Hydrologic Information Systems, instrumentation, and modeling methods. These research advances in theory and practice call for similar efforts and improvements in hydrologic education. The typical, textbook-based approach in hydrologic education has focused on specific applications and/or unit processes associated with the hydrologic cycle, with idealizations, rather than on the contextual relations in the physical processes and the spatial and temporal dynamics connecting climate and ecosystems. An appreciation of the natural variability of these processes will lead to graduates with the ability to develop independent learning skills and understanding. This appreciation cannot be gained in curricula where field components such as observational and experimental data are deficient. These types of data are also critical when using simulation models to create environments that support this type of learning. Additional sources of observations, in conjunction with models and field data, are key to students' understanding of the challenges associated with using models to represent such complex systems. Recent advances in scientific visualization and web-based technologies provide new opportunities for the development of active learning techniques utilizing ongoing research. The overall goal of the current study is to deliver visual, case-based, data- and simulation-driven learning experiences to instructors and students through a web server-based system. Open-source web technologies and community-based tools are used to facilitate wide dissemination and adaptation by diverse, independent institutions. The new hydrologic learning modules are based on recent developments in hydrologic modeling, data, and resources. The modules are embedded in three regional-scale ecosystems: Coastal Louisiana, the Florida Everglades, and Utah's Great Salt Lake Basin. These sites provide a wealth of hydrologic concepts and scenarios that can be used in most water resource and hydrology curricula. The study develops several learning modules based on the three hydro-systems, covering subjects such as water-budget analysis, effects of human and natural changes, climate-hydrology teleconnections, and water-resource management scenarios. The new developments include an instructional interface that gives critical guidance and support to the learner, and an instructor's guide containing adaptation and implementation procedures to assist instructors in adopting and integrating the material into courses and to provide a consistent experience. The design of the new hydrologic education developments will be transferable to independent institutions and adaptable both instructionally and technically through a server system capable of supporting additional developments by the educational community.
Rivera, Daniel E; Pew, Michael D; Collins, Linda M
2007-05-01
The goal of this paper is to describe the role that control engineering principles can play in developing and improving the efficacy of adaptive, time-varying interventions. It is demonstrated that adaptive interventions constitute a form of feedback control system in the context of behavioral health. Consequently, drawing from ideas in control engineering has the potential to significantly inform the analysis, design, and implementation of adaptive interventions, leading to improved adherence, better management of limited resources, a reduction of negative effects, and overall more effective interventions. This article illustrates how to express an adaptive intervention in control engineering terms, and how to use this framework in a computer simulation to investigate the anticipated impact of intervention design choices on efficacy. The potential benefits of operationalizing decision rules based on control engineering principles are particularly significant for adaptive interventions that involve multiple components or address co-morbidities, situations that pose significant challenges to conventional clinical practice.
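A minimal simulation of this framing is sketched below: at each review point the intervention dosage is adjusted in proportion to the gap between a target outcome and the measured outcome. The first-order participant model, the gain and the initial values are illustrative assumptions, not the article's models or decision rules.

# A minimal simulation sketch of the idea that an adaptive intervention can be
# framed as a feedback controller: at each review point, the intervention
# "dosage" is adjusted in proportion to the gap between a target outcome and
# the measured outcome. The simple first-order participant model and the gain
# are illustrative assumptions, not the article's models or decision rules.

def simulate(weeks=12, target=10.0, gain=0.6):
    outcome, dosage = 2.0, 0.0          # initial measured outcome and dosage
    history = []
    for week in range(weeks):
        error = target - outcome        # discrepancy the decision rule acts on
        dosage = max(0.0, dosage + gain * error)   # proportional adjustment, no negative dose
        # Toy participant response: the outcome drifts toward a level set by the dosage.
        outcome += 0.3 * (0.8 * dosage - outcome)
        history.append((week, dosage, outcome))
    return history

if __name__ == "__main__":
    for week, dosage, outcome in simulate():
        print(f"week {week:2d}  dosage {dosage:6.2f}  outcome {outcome:6.2f}")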
Open Biomedical Engineering education in Africa.
Ahluwalia, Arti; Atwine, Daniel; De Maria, Carmelo; Ibingira, Charles; Kipkorir, Emmauel; Kiros, Fasil; Madete, June; Mazzei, Daniele; Molyneux, Elisabeth; Moonga, Kando; Moshi, Mainen; Nzomo, Martin; Oduol, Vitalice; Okuonzi, John
2015-08-01
Despite the virtual revolution, the mainstream academic community in most countries remains largely ignorant of the potential of web-based teaching resources and of the expansion of open source software, hardware and rapid prototyping. In the context of Biomedical Engineering (BME), where human safety and wellbeing is paramount, a high level of supervision and quality control is required before open source concepts can be embraced by universities and integrated into the curriculum. In the meantime, students, more than their teachers, have become attuned to continuous streams of digital information, and teaching methods need to adapt rapidly by giving them the skills to filter meaningful information and by supporting collaboration and co-construction of knowledge using open, cloud and crowd based technology. In this paper we present our experience in bringing these concepts to university education in Africa, as a way of enabling rapid development and self-sufficiency in health care. We describe the three summer schools held in sub-Saharan Africa where both students and teachers embraced the philosophy of open BME education with enthusiasm, and discuss the advantages and disadvantages of opening education in this way in the developing and developed world.
Tracing medical information over the Internet.
Mutairi, S M
2000-05-01
The Internet has become, without doubt, a huge and valuable source of information for researchers. The wealth of information on the Internet is second to none, and medical information is no exception. Yet with the vast expansion of the Internet, and the World Wide Web in particular, finding the kind of information one is looking for can mean browsing thousands of web sites, an experience rather like searching for a needle in a haystack. That is why search engines and subject indexes were introduced as a means to overcome this problem and grew so rapidly. In general, there are three approaches to retrieving data from the World Wide Web: subject directories, search engines and detailed subject indexes. However, there is no single comprehensive search engine or directory, and it is recommended to use more than one, with different keywords and synonyms.
Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.
ERIC Educational Resources Information Center
Carlson, Randal D.; Repman, Judi
2002-01-01
Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)
Use of Web Search Engines and Personalisation in Information Searching for Educational Purposes
ERIC Educational Resources Information Center
Salehi, Sara; Du, Jia Tina; Ashman, Helen
2018-01-01
Introduction: Students increasingly depend on Web search for educational purposes. This causes concerns among education providers as some evidence indicates that in higher education, the disadvantages of Web search and personalised information are not justified by the benefits. Method: One hundred and twenty university students were surveyed about…
Modeling Rich Interactions for Web Search Intent Inference, Ranking and Evaluation
ERIC Educational Resources Information Center
Guo, Qi
2012-01-01
Billions of people interact with Web search engines daily and their interactions provide valuable clues about their interests and preferences. While modeling search behavior, such as queries and clicks on results, has been found to be effective for various Web search applications, the effectiveness of the existing approaches is limited by…
Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training
ERIC Educational Resources Information Center
Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María
2015-01-01
This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…
10 Ways To Take Charge of the Web. Easy Strategies for Internet Smarts.
ERIC Educational Resources Information Center
Wood, Julie M.
2000-01-01
Strategies to help teachers use the Internet effectively include: explore individual interests online; develop acceptable use policies; narrow the playing field; know search engines; use filters; utilize the World Wide Web to lighten the load; teach students to investigate websites effectively; use the Web for professional development; teach…
Multitasking Web Searching and Implications for Design.
ERIC Educational Resources Information Center
Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda
2003-01-01
Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…
Development and Evaluation of Mechatronics Learning System in a Web-Based Environment
ERIC Educational Resources Information Center
Shyr, Wen-Jye
2011-01-01
The development of remote laboratories suitable for reinforcing undergraduate-level teaching of mechatronics is important. For this reason, a Web-based mechatronics learning system, called RECOLAB (REmote COntrol LABoratory), for remote learning in engineering education has been developed in this study. The web-based environment is an…
A Web Service and Interface for Remote Electronic Device Characterization
ERIC Educational Resources Information Center
Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.
2011-01-01
A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…
QUEST: An Assessment Tool for Web-Based Learning.
ERIC Educational Resources Information Center
Choren, Ricardo; Blois, Marcelo; Fuks, Hugo
In 1997, the Software Engineering Laboratory at Pontifical Catholic University of Rio de Janeiro (Brazil) implemented the first version of AulaNet (TM) a World Wide Web-based educational environment. Some of the teaching staff will use this environment in 1998 to offer regular term disciplines through the Web. This paper introduces Quest, a tool…
Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems
ERIC Educational Resources Information Center
Porter, Brandi
2011-01-01
This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…
Communication Webagogy 2.0: More Click, Less Drag.
ERIC Educational Resources Information Center
Radford, Marie L.; Wagner, Kurt W.
2000-01-01
Argues that, because of the chaotic nature of the Web and the competing searching software, no single search tool will suffice. Lists and discusses meta search engines; communication meta-sites and subject directories, all indexed by humans; teaching resources for communication courses that utilize the unique features of the Web; and web sites…
Guide to the Internet. The world wide web.
Pallen, M.
1995-01-01
The world wide web provides a uniform, user friendly interface to the Internet. Web pages can contain text and pictures and are interconnected by hypertext links. The addresses of web pages are recorded as uniform resource locators (URLs), transmitted by hypertext transfer protocol (HTTP), and written in hypertext markup language (HTML). Programs that allow you to use the web are available for most operating systems. Powerful on line search engines make it relatively easy to find information on the web. Browsing through the web--"net surfing"--is both easy and enjoyable. Contributing to the web is not difficult, and the web opens up new possibilities for electronic publishing and electronic journals. PMID:8520402
Patient empowerment by increasing the understanding of medical language for lay users.
Topac, V; Stoicu-Tivadar, V
2013-01-01
Patient empowerment is important in order to increase the quality of medical care and the quality of life of patients. An important obstacle to empowering patients is the language barrier that lay patients encounter when accessing medical information. The aim was to design and develop a service that helps increase the understanding of medical language for lay persons. The service identifies and explains medical terminology in a given text by annotating the terms in the original text with their definitions. It is based on an original terminology interpretation engine that uses a fuzzy matching dictionary. The service was implemented in two projects: (a) in the server of a tele-care system (TELEASIS), to adapt medical text assigned by medical personnel for the assisted patients, and (b) in a dedicated web site that can adapt the medical language of raw text or of existing web pages. The output of the service was evaluated by a group of persons, and the results indicate that such a system can increase the understanding of medical texts. Several design decisions were derived from the evaluation and are being considered for future development. Other tests measuring the accuracy and time performance of the fuzzy terminology recognition have been performed. The test results revealed good accuracy and excellent time performance. The current version of the service increases the accessibility of medical language by explaining terminology with good accuracy, while allowing the user to easily identify errors, in order to reduce the risk of incorrect terminology recognition.
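The core of such a service can be sketched with a fuzzy dictionary lookup; the tiny glossary and the use of Python's difflib ratio-based matching below are illustrative assumptions, not the original engine.

# A minimal sketch of the kind of service described above: identify medical
# terms in a text via fuzzy dictionary matching and annotate them with their
# definitions. The tiny dictionary and the use of difflib's ratio-based
# matching are illustrative assumptions, not the original engine.
import difflib
import re

GLOSSARY = {  # hypothetical dictionary entries
    "hypertension": "abnormally high blood pressure",
    "myocardial infarction": "heart attack",
}

def annotate(text, glossary=GLOSSARY, cutoff=0.85):
    terms = list(glossary)
    annotated = []
    for token in re.findall(r"[A-Za-z]+", text):
        match = difflib.get_close_matches(token.lower(), terms, n=1, cutoff=cutoff)
        if match:
            annotated.append(f"{token} [{glossary[match[0]]}]")
        else:
            annotated.append(token)
    return " ".join(annotated)

if __name__ == "__main__":
    # A misspelling still resolves to the dictionary entry thanks to fuzzy matching.
    print(annotate("Patient with hypertention and diabetes"))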
From Web accessibility to Web adaptability.
Kelly, Brian; Nevile, Liddy; Sloan, David; Fanou, Sotiris; Ellison, Ruth; Herrod, Lisa
2009-07-01
This article asserts that current approaches to enhance the accessibility of Web resources fail to provide a solid foundation for the development of a robust and future-proofed framework. In particular, they fail to take advantage of new technologies and technological practices. The article introduces a framework for Web adaptability, which encourages the development of Web-based services that can be resilient to the diversity of uses of such services, the target audience, available resources, technical innovations, organisational policies and relevant definitions of 'accessibility'. The article refers to a series of author-focussed approaches to accessibility through which the authors and others have struggled to find ways to promote accessibility for people with disabilities. These approaches depend upon the resource author's determination of the anticipated users' needs and their provision. Through approaches labelled as 1.0, 2.0 and 3.0, the authors have widened their focus to account for contexts and individual differences in target audiences. Now, the authors want to recognise the role of users in determining their engagement with resources (including services). To distinguish this new approach, the term 'adaptability' has been used to replace 'accessibility'; new definitions of accessibility have been adopted, and the authors have reviewed their previous work to clarify how it is relevant to the new approach. Accessibility 1.0 is here characterised as a technical approach in which authors are told how to construct resources for a broadly defined audience. This is known as universal design. Accessibility 2.0 was introduced to point to the need to account for the context in which resources would be used, to help overcome inadequacies identified in the purely technical approach. Accessibility 3.0 moved the focus on users from a homogenised universal definition to recognition of the idiosyncratic needs and preferences of individuals and to cater for them. All of these approaches placed responsibility within the authoring/publishing domain without recognising the role the user might want to play, or the roles that other users in social networks, or even Web services might play. Adaptability shifts the emphasis and calls for greater freedom for the users to facilitate individual accessibility in the open Web environment.
Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A
2002-01-01
This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.
Taboada, María; Martínez, Diego; Pilo, Belén; Jiménez-Escrig, Adriano; Robinson, Peter N; Sobrido, María J
2012-07-31
Semantic Web technology can considerably catalyze translational genetics and genomics research in medicine, where the interchange of information between basic research and clinical levels becomes crucial. This exchange involves mapping abstract phenotype descriptions from research resources, such as knowledge databases and catalogs, to unstructured datasets produced through experimental methods and clinical practice. This is especially true for the construction of mutation databases. This paper presents a way of harmonizing abstract phenotype descriptions with patient data from clinical practice, and of querying this dataset about relationships between phenotypes and genetic variants at different levels of abstraction. Owing to the current availability of ontological and terminological resources that have already reached some consensus in biomedicine, a reuse-based ontology engineering approach was followed. The proposed approach uses the Web Ontology Language (OWL) to represent the phenotype ontology and the patient model, the Semantic Web Rule Language (SWRL) to bridge the gap between phenotype descriptions and clinical data, and the Semantic Query-Enhanced Web Rule Language (SQWRL) to query relevant phenotype-genotype bidirectional relationships. The work tests the use of semantic web technology in the biomedical research domain of cerebrotendinous xanthomatosis (CTX), using a real dataset and ontologies. A framework to query relevant phenotype-genotype bidirectional relationships is provided. Phenotype descriptions and patient data were harmonized by defining 28 Horn-like rules in terms of the OWL concepts. In total, 24 patterns of SQWRL queries were designed following the initial list of competency questions. As the approach is based on OWL, the semantics of the framework follows the standard logical model of the open-world assumption. This work demonstrates how semantic web technologies can be used to support the flexible representation and computational inference mechanisms required to query patient datasets at different levels of abstraction. The open-world assumption is especially suited to describing only partially known phenotype-genotype relationships, in a way that is easily extensible. In the future, this type of approach could offer researchers a valuable resource for inferring new data from patient data for statistical analysis in translational research. In conclusion, phenotype description formalization and mapping to clinical data are two key elements for interchanging knowledge between basic and clinical research.
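The style of query the abstract describes can be illustrated with a small RDF graph; the sketch below uses rdflib and SPARQL rather than the paper's OWL/SWRL/SQWRL stack, and the vocabulary and data are hypothetical.

# A minimal sketch (using rdflib and SPARQL rather than the paper's OWL/SWRL/
# SQWRL stack) of querying phenotype-genotype relationships over a small RDF
# graph. The vocabulary (ex:hasPhenotype, ex:hasVariant) and the data are
# hypothetical and only illustrate the style of query the abstract describes.
from rdflib import Graph

TOY_DATA = """
@prefix ex: <http://example.org/ctx#> .

ex:patient1 ex:hasPhenotype ex:Xanthoma ;
            ex:hasVariant  ex:CYP27A1_c1183C_T .
ex:patient2 ex:hasPhenotype ex:Cataract ;
            ex:hasVariant  ex:CYP27A1_c1263A_G .
"""

QUERY = """
PREFIX ex: <http://example.org/ctx#>
SELECT ?variant ?phenotype WHERE {
    ?patient ex:hasVariant ?variant ;
             ex:hasPhenotype ?phenotype .
}
"""

if __name__ == "__main__":
    g = Graph()
    g.parse(data=TOY_DATA, format="turtle")
    for variant, phenotype in g.query(QUERY):
        print(f"{variant} co-occurs with {phenotype}")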
A study of an adaptive replication framework for orchestrated composite web services.
Mohamed, Marwa F; Elyamany, Hany F; Nassar, Hamed M
2013-01-01
Replication is considered one of the most important techniques for improving the Quality of Service (QoS) of published Web Services. It has achieved impressive success in managing resource sharing and usage in order to moderate the energy consumed in IT environments. For a robust and successful replication process, attention should be paid to choosing a suitable time as well as to the constraints and capabilities under which the process runs. The replication process is time-consuming, since outsourcing new replicas onto other hosts is lengthy. Furthermore, most of the business processes implemented over the Web today are composed of multiple Web services working together in two main styles: orchestration and choreography. Accomplishing replication over such business processes is a further challenge due to the complexity and flexibility involved. In this paper, we present an adaptive replication framework for regular and orchestrated composite Web services. The suggested framework includes a number of components for detecting unexpected and undesirable events, such as failure or overloading, that might occur when consuming the originally published web services. It also includes a dedicated replication controller to manage the replication process and select the best host on which to place a new replica. In addition, it includes a component for predicting the incoming load in order to decrease the time needed for outsourcing new replicas, greatly enhancing performance. A simulation environment has been created to measure the performance of the suggested framework. The results indicate that adaptive replication with the prediction scenario is the best option for enhancing the performance of the replication process in an online business environment.
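The control loop the framework implies can be sketched as follows: predict the incoming load, compare it with the capacity of the current replicas and, when needed, place a new replica on the least-loaded host. The smoothing-based predictor, the capacity figure and the host model are illustrative assumptions, not the paper's components.

# A minimal sketch of the control loop described above: predict incoming load,
# compare it against the capacity of the current replicas, and, if needed,
# place a new replica on the best (least-loaded) host. The smoothing-based
# predictor, capacity numbers and host model are illustrative assumptions,
# not the paper's framework.

class ReplicationController:
    def __init__(self, hosts, capacity_per_replica=100.0, alpha=0.5):
        self.hosts = dict(hosts)              # host name -> current load
        self.capacity = capacity_per_replica  # requests/s one replica can absorb
        self.alpha = alpha                    # exponential smoothing factor
        self.replicas = 1
        self.predicted = 0.0

    def predict(self, observed_load):
        """Exponentially smoothed one-step-ahead load prediction."""
        self.predicted = self.alpha * observed_load + (1 - self.alpha) * self.predicted
        return self.predicted

    def step(self, observed_load):
        if self.predict(observed_load) > self.replicas * self.capacity:
            host = min(self.hosts, key=self.hosts.get)   # least-loaded host
            self.hosts[host] += self.capacity            # account for the new replica
            self.replicas += 1
            return f"spawned replica #{self.replicas} on {host}"
        return "no replication needed"

if __name__ == "__main__":
    controller = ReplicationController({"hostA": 40.0, "hostB": 10.0})
    for load in [80, 150, 260, 240]:          # rising request rate
        print(controller.step(load))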
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).
Yarkoni, Tal
2012-01-01
Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (1) open and transparent access to accumulated evaluation data, (2) personalized and highly customizable performance metrics, and (3) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting toward such models as soon as possible. PMID:23060783
Compliant threads maximize spider silk connection strength and toughness
Meyer, Avery; Pugno, Nicola M.; Cranford, Steven W.
2014-01-01
Millions of years of evolution have adapted spider webs to achieve a range of functionalities, including the well-known capture of prey, with efficient use of material. One feature that has escaped extensive investigation is the silk-on-silk connection joints within spider webs, particularly from a structural mechanics perspective. We report a joint theoretical and computational analysis of an idealized silk-on-silk fibre junction. By modifying the theory of multiple peeling, we quantitatively compare the performance of the system while systematically increasing the rigidity of the anchor thread, by both scaling the stress–strain response and the introduction of an applied pre-strain. The results of our study indicate that compliance is a virtue—the more extensible the anchorage, the tougher and stronger the connection becomes. In consideration of the theoretical model, in comparison with rigid substrates, a compliant anchorage enormously increases the effective adhesion strength (work required to detach), independent of the adhered thread itself, attributed to a nonlinear alignment between thread and anchor (contact peeling angle). The results can direct novel engineering design principles to achieve possible load transfer from compliant fibre-to-fibre anchorages, be they silk-on-silk or another, as-yet undeveloped, system. PMID:25008083
Digital dissemination platform of transportation engineering education materials.
DOT National Transportation Integrated Search
2014-09-01
National agencies have called for more widespread adoption of best practices in engineering education. To facilitate this sharing of practices we will develop a web-based system that will be used by transportation engineering educators to share curri...
Adaptable Learning Pathway Generation with Ant Colony Optimization
ERIC Educational Resources Information Center
Wong, Lung-Hsiang; Looi, Chee-Kit
2009-01-01
One of the new major directions in research on web-based educational systems is the notion of adaptability: the educational system adapts itself to the learning profile, preferences and ability of the student. In this paper, we look into the issues of providing adaptability with respect to learning pathways. We explore the state of the art with…
Transforming Systems Engineering through Model Centric Engineering
2017-08-08
[List-of-figures excerpt: Figure 5, Semantic Web Technologies related to Layers of Abstraction; Figure 6, NASA/JPL Instantiation of OpenMBEE (circa 2014); Figure 7, NASA/JPL Foundational Ontology for Systems Engineering. The surrounding text discusses the Digital Engineering (DE) Transformation initiative and the relationship fostered with the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory.]
Using Web Logs in the Science Classroom
ERIC Educational Resources Information Center
Duplichan, Staycle C.
2009-01-01
As educators we must ask ourselves if we are meeting the needs of today's students. The science world is adapting to our ever-changing society; are the methodology and philosophy of our educational system keeping up? In this article, you'll learn why web logs (also called blogs) are an important Web 2.0 tool in your science classroom and how they…
ERIC Educational Resources Information Center
Wang, Tzu-Hua; Wang, Wei-Lung; Wang, Kuo-Hua; Huang, Shih-Chieh
The study attempted to adapt two web tools, FFS system (Frontpage Feedback System) and WATA system (Web-based Assessment and Test Analysis System), to construct a Hi-FAME (High Feedback-Assessment-Multimedia-Environment) Model in WBI (Web-based Instruction) to facilitate pre-service teacher training. Participants were 30 junior pre-service…
ERIC Educational Resources Information Center
Stockwell, Esther
2016-01-01
This study adapted web-based exploratory tasks using WebQuests as a means of enabling students to understand and reflect on both the target and their own culture. Learners actively used various authentic resources selected to meet their linguistic and cognitive needs to complete the tasks. The aim of this study was to help Japanese university…
Sensor Web Dynamic Measurement Techniques and Adaptive Observing Strategies
NASA Technical Reports Server (NTRS)
Talabac, Stephen J.
2004-01-01
Sensor Web observing systems may have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable environmental features and events. This improvement will come about by integrating novel data collection techniques, new or improved instruments, emerging communications technologies and protocols, sensor mark-up languages, and interoperable planning and scheduling systems. In contrast to today's observing systems, "event-driven" sensor webs will synthesize real- or near-real time measurements and information from other platforms and then react by reconfiguring the platforms and instruments to invoke new measurement modes and adaptive observation strategies. Similarly, "model-driven" sensor webs will utilize environmental prediction models to initiate targeted sensor measurements or to use a new observing strategy. The sensor web concept contrasts with today's data collection techniques and observing system operations concepts where independent measurements are made by remote sensing and in situ platforms that do not share, and therefore cannot act upon, potentially useful complementary sensor measurement data and platform state information. This presentation describes NASA's view of event-driven and model-driven Sensor Webs and highlights several research and development activities at the Goddard Space Flight Center.
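As a rough illustration of the "event-driven" behaviour described above, where a shared measurement triggers reconfiguration of other platforms, the hypothetical Python sketch below shows the control flow only; the platform names, the confidence threshold and the retasking logic are invented for illustration and are not NASA's.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an event-driven sensor web: platforms publish observations
# to a shared web, and when an event trigger fires the other platforms are re-tasked
# with a new observing mode. All names and values are invented.
@dataclass
class Platform:
    name: str
    mode: str = "survey"
    def retask(self, mode: str, target: str) -> None:
        self.mode = mode
        print(f"{self.name}: switching to {mode} mode over {target}")

@dataclass
class SensorWeb:
    platforms: list = field(default_factory=list)
    threshold: float = 0.8
    def ingest(self, source: str, feature: str, confidence: float) -> None:
        # React to a shared measurement instead of ignoring it (the contrast the
        # abstract draws with today's independent observing systems).
        if confidence >= self.threshold:
            for p in self.platforms:
                if p.name != source:
                    p.retask("targeted", feature)

web = SensorWeb([Platform("orbiter"), Platform("uav"), Platform("buoy")])
web.ingest(source="buoy", feature="algal bloom", confidence=0.92)
```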
Miles Discusses Experiment With NASA Personnel
NASA Technical Reports Server (NTRS)
1972-01-01
Lexington, Massachusetts high school student, Judith Miles, discusses her proposed Skylab experiment with Keith Demorest (right) and Henry Floyd, both of Marshall Space Flight Center (MSFC). Her experiment, called 'Web Formation in Zero Gravity', called for spiders to be released into a box and their actions recorded to determine how well they would adapt to the absence of gravity. Spiders are known to adapt quickly to other changes in the environment, but nothing was known of their ability to adapt to weightlessness. At the same time that spiders were weaving webs in Earth orbit, similar spiders were spinning webs in identical boxes on Earth under full gravity conditions. Miles was among the 25 winners of a contest in which some 3,500 high school students proposed experiments for the following year's Skylab Mission. Of the 25 students, 6 did not see their experiments conducted on Skylab because the experiments were not compatible with Skylab hardware and timelines. Of the 19 remaining, 11 experiments required the manufacture of equipment.
75 FR 52255 - Airworthiness Directives; Air Tractor, Inc. Models AT-802 and AT-802A Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
... replacement at whichever Follow Snow Engineering Co. Service caps, the web plates, the center of the following... Engineering Co. Service Letter 80GG, revised December 21, 2005; Snow Engineering Co. Service Letter 284, dated October 4, 2009; Snow Engineering Co. Service Letter 281, dated August 1, 2009; Snow Engineering Co...
Integrated gas turbine engine-nacelle
NASA Technical Reports Server (NTRS)
Adamson, A. P.; Sargisson, D. F.; Stotler, C. L., Jr. (Inventor)
1977-01-01
A nacelle for use with a gas turbine engine is presented. An integral webbed structure resembling a spoked wheel for rigidly interconnecting the nacelle and engine, provides lightweight support. The inner surface of the nacelle defines the outer limits of the engine motive fluid flow annulus while the outer surface of the nacelle defines a streamlined envelope for the engine.
2011-01-01
Background To develop a web-based computer adaptive testing (CAT) application for efficiently collecting data regarding workers' perceptions of job satisfaction, we examined whether a 37-item Job Content Questionnaire (JCQ-37) could evaluate the job satisfaction of individual employees as a single construct. Methods The JCQ-37 makes data collection via CAT on the internet easy, viable and fast. A Rasch rating scale model was applied to analyze data from 300 randomly selected hospital employees who participated in job-satisfaction surveys in 2008 and 2009 via non-adaptive and computer-adaptive testing, respectively. Results Of the 37 items on the questionnaire, 24 items fit the model fairly well. Person-separation reliability for the 2008 surveys was 0.88. Measures from both years and item 8 (job satisfaction) for groups were successfully evaluated through item-by-item analyses using t-tests. Workers aged 26-35 felt that job satisfaction was significantly worse in 2009 than in 2008. Conclusions A Web-CAT developed in the present paper was shown to be more efficient than traditional computer-based or pen-and-paper assessments at collecting data regarding workers' perceptions of job content. PMID:21496311
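The Rasch rating scale model referred to above has the standard textbook form below, where theta_n is the person measure, delta_i the item difficulty and tau_k the category thresholds; this is the generic model statement, not the paper's specific parameterization or estimates.

```latex
% Andrich rating scale model: probability that person n responds in category
% x \in \{0,\dots,m\} of item i, with \tau_0 \equiv 0.
P(X_{ni}=x) \;=\;
\frac{\exp\!\left(\sum_{k=0}^{x}\bigl[\theta_n-(\delta_i+\tau_k)\bigr]\right)}
     {\sum_{j=0}^{m}\exp\!\left(\sum_{k=0}^{j}\bigl[\theta_n-(\delta_i+\tau_k)\bigr]\right)}
```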
National Institute of Standards and Technology Data Gateway
SRD 30 NIST Structural Ceramics Database (Web, free access) The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.
The Framework of Intervention Engine Based on Learning Analytics
ERIC Educational Resources Information Center
Sahin, Muhittin; Yurdugül, Halil
2017-01-01
Learning analytics primarily deals with the optimization of learning environments and the ultimate goal of learning analytics is to improve learning and teaching efficiency. Studies on learning analytics seem to have been made in the form of adaptation engine and intervention engine. Adaptation engine studies are quite widespread, but intervention…
Trapp, Jamie
2016-12-01
There are often differences in a publication's citation count, depending on the database accessed. Here, aspects of citation counts for medical physics and biomedical engineering papers are studied using papers published in the journal Australasian Physical and Engineering Sciences in Medicine. Comparison is made between the Web of Science, Scopus, and Google Scholar. Papers are categorised into subject matter, and citation trends are examined. It is shown that review papers as a group tend to receive more citations on average; however, the highest-cited individual papers are more likely to be research papers.
WebGLORE: a web service for Grid LOgistic REgression.
Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian
2013-12-15
WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation.
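WebGLORE's defining idea, as described above, is that only aggregated local statistics travel to the server. The sketch below illustrates that general style of computation with a plain Newton-Raphson fit on simulated data; it is an illustration of the approach, assuming summed per-site gradients and Hessians, and is not WebGLORE's actual protocol or code.

```python
import numpy as np

# Illustrative sketch: a server fits a global logistic regression by iteratively
# summing per-site gradients and Hessians, so raw records never leave the
# participating sites -- only aggregated statistics do.
def local_statistics(X, y, beta):
    """Per-site contribution for one Newton-Raphson step."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted probabilities
    grad = X.T @ (y - p)                          # local gradient
    hess = X.T @ (X * (p * (1 - p))[:, None])     # local Hessian
    return grad, hess

def federated_logistic_regression(sites, n_features, n_iter=15):
    beta = np.zeros(n_features)
    for _ in range(n_iter):
        grad = np.zeros(n_features)
        hess = np.zeros((n_features, n_features))
        for X, y in sites:                        # each "site" keeps its own X, y
            g, h = local_statistics(X, y, beta)
            grad += g
            hess += h
        beta += np.linalg.solve(hess, grad)       # global Newton-Raphson update
    return beta

rng = np.random.default_rng(0)
true_beta = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):                                # three simulated data holders
    X = rng.normal(size=(200, 3))
    y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))
print(federated_logistic_regression(sites, n_features=3))
```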
Query-Structure Based Web Page Indexing
2012-11-01
the massive amount of data present on the web. In our third participation in the web track at TREC 2012, we explore the idea of building an...the ad-hoc and diversity task. 1 INTRODUCTION The rapid growth and massive quantities of data on the Internet have increased the importance and...complexity of information retrieval systems. The amount and the diversity of the web data introduce shortcomings in the way search engines rank their
ERIC Educational Resources Information Center
Tunender, Heather; Ervin, Jane
1998-01-01
Character strings were planted in a World Wide Web site (Project Whistlestop) to test indexing and retrieval rates of five Web search tools (Lycos, infoseek, AltaVista, Yahoo, Excite). It was found that search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…
Talking Trash on the Internet: Working Real Data into Your Classroom.
ERIC Educational Resources Information Center
Lynch, Maurice P.; Walton, Susan A.
1998-01-01
Describes how a middle school teacher used the Chesapeake Bay National Estuarine Research Reserve in Virginia (CBNERRVA) Web site to provide scientific data for a unit on recycling. Includes sample data sheets and tables, charts results of a Web search for marine debris using different search engines, and lists selected marine data Web sites. (PEN)
47 CFR 73.8000 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Engineering and Technology (OET) Web site: http://www.fcc.gov/oet/info/documents/bulletins/. (1) OET Bulletin...., Suite 1200, Washington, DC 20006, or at the ATSC Web site: http://www.atsc.org/standards.html. (1) ATSC... Standards Institute (ANSI), 25 West 43rd Street, 4th Floor, New York, NY 10036 or at the ANSI Web site: http...
In Search of a Better Search Engine
ERIC Educational Resources Information Center
Kolowich, Steve
2009-01-01
Early this decade, the number of Web-based documents stored on the servers of the University of Florida hovered near 300,000. By the end of 2006, that number had leapt to four million. Two years later, the university hosts close to eight million Web documents. Web sites for colleges and universities everywhere have become repositories for data…
Searching the Web: The Public and Their Queries.
ERIC Educational Resources Information Center
Spink, Amanda; Wolfram, Dietmar; Jansen, Major B. J.; Saracevic, Tefko
2001-01-01
Reports findings from a study of searching behavior by over 200,000 users of the Excite search engine. Analysis of over one million queries revealed most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features. Concludes that Web searching by the public differs significantly from searching of…
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze user's queries for information retrieval. Results suggest most users do not employ advanced search features, and the linguistic structure often resembles a human-human communication model that is not always successful in human-computer communication.…
ERIC Educational Resources Information Center
Tillotson, Joy
2003-01-01
Describes a survey that was conducted involving participants in the library instruction program at two Canadian universities in order to describe the characteristics of students receiving instruction in Web searching. Examines criteria for evaluating Web sites, search strategies, use of search engines, and frequency of use. Questionnaire is…
NASA Astrophysics Data System (ADS)
Watanabe, W. M.; Candido, A.; Amâncio, M. A.; De Oliveira, M.; Pardo, T. A. S.; Fortes, R. P. M.; Aluísio, S. M.
2010-12-01
This paper presents an approach for assisting low-literacy readers in accessing online Web information. The "Educational FACILITA" tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models regarding accessibility concerns. In particular, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling for improving Web accessibility. We report on the results obtained from a pilot study on usability analysis carried out with low-literacy users. The preliminary results show that "Educational FACILITA" improves the comprehension of text elements, although the assistance mechanisms might also confuse users when word sense ambiguity is introduced, by gathering, for a complex word, a list of synonyms with multiple meanings. This finding motivates a future solution in which the correct sense of a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also identified that experienced computer users find the tool to be more useful than novice computer users do.
CDAPubMed: a browser extension to retrieve EHR-based biomedical literature.
Perez-Rey, David; Jimenez-Castellanos, Ana; Garcia-Remesal, Miguel; Crespo, Jose; Maojo, Victor
2012-04-05
Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems.
CDAPubMed: a browser extension to retrieve EHR-based biomedical literature
2012-01-01
Background Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems. PMID:22480327
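To make the query-building step described in the two records above concrete, the hypothetical Python sketch below AND-combines MeSH terms (the kind of headings CDAPubMed identifies in an EHR) and submits the query to PubMed through the public NCBI E-utilities esearch endpoint. The example terms are invented and the code is not part of CDAPubMed.

```python
import json
import urllib.parse
import urllib.request

# Rough illustration of focused query building: combine MeSH terms extracted from
# an EHR into one PubMed query and ask the NCBI E-utilities esearch endpoint for
# the number of matching citations.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query(mesh_terms):
    """AND-combine MeSH headings so results stay focused on this patient's record."""
    return " AND ".join(f'"{term}"[MeSH Terms]' for term in mesh_terms)

def pubmed_count(mesh_terms):
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": build_query(mesh_terms),
        "retmode": "json",
        "retmax": 0,
    })
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
        return int(json.load(response)["esearchresult"]["count"])

# Example: a broad query versus one narrowed by additional (illustrative) patient features.
print(pubmed_count(["Breast Neoplasms"]))
print(pubmed_count(["Breast Neoplasms", "Tamoxifen", "Postmenopause"]))
```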
Motivational and adaptational factors of successful women engineers
NASA Astrophysics Data System (ADS)
Bornsen, Susan Edith
It is no surprise that there is a shortage of women engineers. The reasons for the shortage have been researched and discussed in myriad papers, and suggestions for improvement continue to evolve. However, there are few studies that have specifically identified the positive aspects that attract women to engineering and keep them actively engaged in the field. This paper examines how women engineers view their education, their work, and their motivation to remain in the field. A qualitative research design was used to understand the motivation and adaptability factors women use to support their decision to major in engineering and stay in the engineering profession. Women engineers were interviewed using broad questions about motivation and adaptability. Interviews were transcribed and coded, looking for common threads of factors that suggest not only why women engineers persist in the field, but also how they thrive. Findings focus on the experiences, insights, and meaning of women interviewed. A grounded theory approach was used to describe the success factors found in practicing women engineers. The study found categories of attraction to the field, learning environment, motivation and adaptability. Sub-categories of motivation are intrinsic motivational factors such as the desire to make a difference, as well as extrinsic factors such as having an income that allows the kind of lifestyle that supports the family. Women engineers are comfortable with and enjoy working with male peers and when barriers arise, women learn to adapt in the male dominated field. Adaptability was indicated in areas of gender, culture, and communication. Women found strength in the ability to 'read' their clients, and provide insight to their teams. Sufficient knowledge from the field advances theory and offers strategies to programs for administrators and faculty of schools of engineering as well as engineering firms, who have interest in recruitment, and retention of female students. Future research includes expanding the research to other areas of the United States, and improving engineering education pedagogy with more active and experiential learning.
Robotic Mission to Mars: Hands-on, minds-on, web-based learning
NASA Astrophysics Data System (ADS)
Mathers, Naomi; Goktogen, Ali; Rankin, John; Anderson, Marion
2012-11-01
Problem-based learning has been demonstrated as an effective methodology for developing analytical skills and critical thinking. The use of scenario-based learning incorporates problem-based learning whilst encouraging students to collaborate with their colleagues and dynamically adapt to their environment. This increased interaction stimulates a deeper understanding and the generation of new knowledge. The Victorian Space Science Education Centre (VSSEC) uses scenario-based learning in its Mission to Mars, Mission to the Orbiting Space Laboratory and Primary Expedition to the M.A.R.S. Base programs. These programs utilize methodologies such as hands-on applications, immersive-learning, integrated technologies, critical thinking and mentoring to engage students in Science, Technology, Engineering and Mathematics (STEM) and highlight potential career paths in science and engineering. The immersive nature of the programs demands specialist environments such as a simulated Mars environment, Mission Control and Space Laboratory, thus restricting these programs to a physical location and limiting student access to the programs. To move beyond these limitations, VSSEC worked with its university partners to develop a web-based mission that delivered the benefits of scenario-based learning within a school environment. The Robotic Mission to Mars allows students to remotely control a real rover, developed by the Australian Centre for Field Robotics (ACFR), on the VSSEC Mars surface. After completing a pre-mission training program and site selection activity, students take on the roles of scientists and engineers in Mission Control to complete a mission and collect data for further analysis. Mission Control is established using software developed by the ACRI Games Technology Lab at La Trobe University using the principles of serious gaming. The software allows students to control the rover, monitor its systems and collect scientific data for analysis. This program encourages students to work scientifically and explores the interaction between scientists and engineers. This paper presents the development of the program, including the involvement of university students in the development of the rover, the software, and the collation of the scientific data. It also presents the results of the trial phase of this program including the impact on student engagement and learning outcomes.
MR-Tandem: parallel X!Tandem using Hadoop MapReduce on Amazon Web Services
Pratt, Brian; Howbert, J. Jeffry; Tasman, Natalie I.; Nilsson, Erik J.
2012-01-01
Summary: MR-Tandem adapts the popular X!Tandem peptide search engine to work with Hadoop MapReduce for reliable parallel execution of large searches. MR-Tandem runs on any Hadoop cluster but offers special support for Amazon Web Services for creating inexpensive on-demand Hadoop clusters, enabling search volumes that might not otherwise be feasible with the compute resources a researcher has at hand. MR-Tandem is designed to drop in wherever X!Tandem is already in use and requires no modification to existing X!Tandem parameter files, and only minimal modification to X!Tandem-based workflows. Availability and implementation: MR-Tandem is implemented as a lightly modified X!Tandem C++ executable and a Python script that drives Hadoop clusters including Amazon Web Services (AWS) Elastic Map Reduce (EMR), using the modified X!Tandem program as a Hadoop Streaming mapper and reducer. The modified X!Tandem C++ source code is Artistic licensed, supports pluggable scoring, and is available as part of the Sashimi project at http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/extern/xtandem/. The MR-Tandem Python script is Apache licensed and available as part of the Insilicos Cloud Army project at http://ica.svn.sourceforge.net/viewvc/ica/trunk/mr-tandem/. Full documentation and a windows installer that configures MR-Tandem, Python and all necessary packages are available at this same URL. Contact: brian.pratt@insilicos.com PMID:22072385
Dynamically Coupled Food-web and Hydrodynamic Modeling with ADH-CASM
NASA Astrophysics Data System (ADS)
Piercy, C.; Swannack, T. M.
2012-12-01
Oysters and freshwater mussels are "ecological engineers," modifying the local water quality by filtering zooplankton and other suspended particulate matter from the water column and flow hydraulics by impinging on the near-bed flow environment. The success of sessile, benthic invertebrates such as oysters depends on environmental factors including but not limited to temperature, salinity, and flow regime. Typically food-web and other types of ecological models use flow and water quality data as direct input without regard to the feedback between the ecosystem and the physical environment. The USACE-ERDC has developed a coupled hydrodynamic-ecological modeling approach that dynamically couples a 2-D hydrodynamic and constituent transport model, Adaptive Hydraulics (ADH), with a bioenergetics food-web model, the Comprehensive Aquatics Systems Model (CASM), which captures the dynamic feedback between aquatic ecological systems and the environment. We present modeling results from restored oyster reefs in the Great Wicomico River on the western shore of the Chesapeake Bay, which quantify ecosystem services such as the influence of the benthic ecosystem on water quality. Preliminary results indicate that while the influence of oyster reefs on bulk flow dynamics is limited due to the localized influence of oyster reefs, large reefs and the associated benthic ecosystem can create measurable changes in the concentrations of nitrogen, phosphorus, and carbon in the areas around reefs. We also present a sensitivity analysis to quantify the relative sensitivity of the coupled ADH-CASM model to both hydrodynamic and ecological parameter choice.
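A schematic of the dynamic coupling described above, reduced to a single transported constituent and a single benthic filter-feeder pool, is sketched below. The time step, rates and feedback terms are placeholder values chosen only to make the loop run; they do not come from ADH or CASM.

```python
# Schematic sketch of two-way coupling (not the ADH-CASM source code): at every time
# step the hydrodynamic/transport state drives the food-web model, and the biomass of
# filter feeders feeds back on the transported constituent.
DT = 3600.0                 # coupling time step [s]
N_STEPS = 24

zooplankton = 2.0           # transported constituent [mg/L], from the transport model
oyster_biomass = 50.0       # benthic filter-feeder biomass [g dry weight/m^2]
FILTRATION_RATE = 1e-6      # fraction of constituent removed per unit biomass per second
ASSIMILATION = 0.002        # biomass gained per mg of food removed (toy value)
INFLOW = 0.05               # replenishment by advection per step [mg/L]

for step in range(N_STEPS):
    # (1) "hydrodynamic" update: advection replenishes the water column
    zooplankton += INFLOW
    # (2) ecological update: filtration removes food and grows biomass
    removed = min(zooplankton, FILTRATION_RATE * oyster_biomass * DT * zooplankton)
    zooplankton -= removed
    oyster_biomass += ASSIMILATION * removed
    # (3) the depleted constituent field is what the transport model sees next step
    print(f"hour {step + 1:2d}: zooplankton {zooplankton:5.2f} mg/L, "
          f"biomass {oyster_biomass:6.2f} g/m^2")
```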
Interactive 3d Landscapes on Line
NASA Astrophysics Data System (ADS)
Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.
2011-09-01
The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and workflow, and how recent developments in browser technologies could affect them. Even after processing with optimization and decimation tools, the data result in very large databases that require paging, streaming and level-of-detail techniques to allow remote, web-based, real-time use. Our approach has been to select an open-source, scene-graph-based visual simulation library with sufficient performance and flexibility and to adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole town of Montegrotto has been generated procedurally with CityEngine. We used this procedural, rule-based approach because it is particularly well suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry, whereas for the 3D reconstructions of the main historical buildings we adopted computer graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools were developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators were also used to populate the virtual scene and enhance the realism perceived by users during navigation. After describing the 3D modelling and optimization techniques, the paper discusses its results and expectations.
Context-Aware Online Commercial Intention Detection
NASA Astrophysics Data System (ADS)
Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng
With more and more commercial activities moving onto the Internet, people tend to purchase what they need through the Internet or conduct some online research before the actual transactions happen. For many Web users, their online commercial activities start from submitting a search query to search engines. Just like common Web search queries, queries with commercial intention are usually very short. Recognizing the queries with commercial intention against the common queries will help search engines provide proper search results and advertisements, help Web users obtain the right information they desire and help the advertisers benefit from the potential transactions. However, the intentions behind a query vary a lot for users with different backgrounds and interests. The intentions can even be different for the same user when the query is issued in different contexts. In this paper, we present a new algorithm framework based on skip-chain conditional random field (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm improves F1 score by more than 10% over previous algorithms on commercial intention detection.
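For reference, a skip-chain CRF in its standard form combines linear-chain potentials over consecutive positions with additional potentials over selected long-range ("skip") pairs; this is the generic formulation from the CRF literature, not the paper's exact feature set.

```latex
% Skip-chain CRF: linear-chain potentials over consecutive positions t plus
% skip-edge potentials over selected long-range pairs (u,v) \in \mathcal{S}.
p(\mathbf{y}\mid\mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})}
\exp\!\Bigl(
 \sum_{t}\sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)
 \;+\;
 \sum_{(u,v)\in\mathcal{S}}\sum_{k} \mu_k\, g_k(y_u, y_v, \mathbf{x}, u, v)
\Bigr)
```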
75 FR 26321 - Seventeenth Plenary Meeting: RTCA Special Committee 203: Unmanned Aircraft Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... RTCA Workspace Web Tool Special Committee Status Overview Workgroup Updates WG1--Systems Engineering..., Washington, DC 20036; telephone (202) 833-9339; fax (202) 833-9434; Web site http://www.rtca.org...
Intelligent web image retrieval system
NASA Astrophysics Data System (ADS)
Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook
2001-07-01
Recently, web sites such as e-business and shopping mall sites have come to handle large amounts of image information. To find a specific image from these sources, we usually use web search engines or image database engines, which rely on keyword-only or color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, texture- and color-based image classification and indexing techniques, and representation schemes for user usage patterns. The query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of users' preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, experimental results on recall and precision are also presented.
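To illustrate the kind of color-based indexing the abstract refers to (not the authors' implementation), the sketch below summarizes images as coarse RGB histograms and ranks a database by histogram intersection, a common similarity measure in content-based retrieval; the synthetic pixel data and names are placeholders.

```python
import numpy as np

# Color-based retrieval sketch: images become 64-bin RGB histograms, compared with
# histogram intersection (1.0 = identical color distribution).
BINS = 4  # bins per channel -> 4^3 = 64-bin descriptor

def color_descriptor(pixels):
    """pixels: (N, 3) uint8 RGB array; returns a normalized 64-bin histogram."""
    quantized = (pixels // (256 // BINS)).astype(int)
    index = quantized[:, 0] * BINS * BINS + quantized[:, 1] * BINS + quantized[:, 2]
    hist = np.bincount(index, minlength=BINS ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
database = {name: color_descriptor(rng.integers(0, 256, size=(1000, 3), dtype=np.uint8))
            for name in ["product_a", "product_b", "product_c"]}
query = color_descriptor(rng.integers(0, 256, size=(1000, 3), dtype=np.uint8))
ranked = sorted(database, key=lambda n: histogram_intersection(query, database[n]),
                reverse=True)
print(ranked)
```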
The Number of Scholarly Documents on the Public Web
Khabsa, Madian; Giles, C. Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
The number of scholarly documents on the public web.
Khabsa, Madian; Giles, C Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.
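The two-sample capture/recapture approach used in both records above reduces, in its simplest form, to the Lincoln-Petersen estimator, where n1 and n2 are the documents covered by each search engine and m is their overlap. The paper may apply a refinement of this, so the formula below is only the textbook baseline.

```latex
% Lincoln-Petersen capture/recapture estimate of the total population size N:
% n_1, n_2 are the documents indexed by each engine and m is their overlap.
\hat{N} \;=\; \frac{n_1\, n_2}{m}
```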
ERIC Educational Resources Information Center
Kim, Sun Hyung; Kang, Jeong Won; Kroenlein, Kenneth; Magee, Joseph W.; Diky, Vladimir; Muzny, Chris D.; Kazakov, Andrei F.; Chirico, Robert D.; Frenkel, Michael
2013-01-01
We review the concept of uncertainty for thermophysical properties and its critical impact for engineering applications in the core courses of chemical engineering education. To facilitate the translation of developments to engineering education, we employ NIST Web Thermo Tables to furnish properties data with their associated expanded…
Taming the Information Jungle with WWW Search Engines.
ERIC Educational Resources Information Center
Repman, Judi; And Others
1997-01-01
Because searching the Web with different engines often produces different results, the best strategy is to learn how each engine works. Discusses comparing search engines; qualities to consider (ease of use, relevance of hits, and speed); and six of the most popular search tools (Yahoo, Magellan, InfoSeek, AltaVista, Lycos, and Excite). Lists…
Internet Search Engines - Fluctuations in Document Accessibility.
ERIC Educational Resources Information Center
Mettrop, Wouter; Nieuwenhuysen, Paul
2001-01-01
Reports an empirical investigation of the consistency of retrieval through Internet search engines. Evaluates 13 engines: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler, and three national Dutch engines: Ilse, Search.nl and Vindex. The focus is on a characteristic related to size: the degree of…
Clinical software development for the Web: lessons learned from the BOADICEA project
2012-01-01
Background In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. Results We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. Conclusions We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback. PMID:22490389
Clinical software development for the Web: lessons learned from the BOADICEA project.
Cunningham, Alex P; Antoniou, Antonis C; Easton, Douglas F
2012-04-10
In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback.
Grokker, KartOO, Addict-o-Matic and More: Really Different Search Engines
ERIC Educational Resources Information Center
Descy, Don E.
2009-01-01
There are hundreds of unique search engines in the United States and thousands of unique search engines around the world. If people get into search engines designed just to search particular web sites, the number is in the hundreds of thousands. This article looks at: (1) clustering search engines, such as KartOO (www.kartoo.com) and Grokker…
Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P
2011-01-01
To identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines and to develop a search engine to access this information. Compiling web sites that publish information on the rational use of medicines and have no commercial interests. Free-access web sites in Spanish, Galician, Catalan or English. Designing a search engine using the Google "custom search" application. Overall 159 internet addresses were compiled and were classified into 9 labels. We were able to recover the information from the selected sources using a search engine, which is called "AlquimiA" and available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.
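For illustration only, a custom engine like the one described above ("AlquimiA") could also be queried programmatically through Google's Custom Search JSON API; the sketch below is hypothetical (the API key, engine id and query are placeholders) and is not part of the authors' tool, which is used through its web page.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical sketch: query a Google custom search engine via the Custom Search
# JSON API. API_KEY and ENGINE_ID are placeholders and must be supplied by the user.
API_KEY = "YOUR_API_KEY"
ENGINE_ID = "YOUR_ENGINE_ID"

def search_grey_literature(query, num=5):
    params = urllib.parse.urlencode({
        "key": API_KEY,
        "cx": ENGINE_ID,
        "q": query,
        "num": num,
    })
    url = f"https://www.googleapis.com/customsearch/v1?{params}"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return [(item["title"], item["link"]) for item in payload.get("items", [])]

if __name__ == "__main__":
    for title, link in search_grey_literature("dabigatran evaluation"):
        print(title, link)
```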
Variability of patient spine education by Internet search engine.
Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S
2014-03-01
Patients are increasingly reliant upon the Internet as a primary source of medical information. The educational experience varies by search engine and search term, and changes daily. There are no tools for critical evaluation of spinal surgery websites. The aims were to highlight the variability between common search engines for the same search terms, to detect bias by the prevalence of specific kinds of websites for certain spinal disorders, and to demonstrate a simple scoring system for spinal disorder websites that patients can use to maximize the quality of the information they encounter. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated. A negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than the Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites with peer-reviewed literature (P=0.003). Educational web sites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). In contrast, professional websites, including hospital-run, industry-sponsored, legal, and peer-reviewed web pages, were less likely to be found in a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information which can serve as a useful tool for patient education. High-quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is clear variability in the way search engines present information to the patient. Published by Elsevier B.V.
A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations
NASA Astrophysics Data System (ADS)
Patton, E. W.; West, P.; Rozell, E.; Zheng, J.
2010-12-01
There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects and data, among other content. However, each system varies in its capabilities, and often content is stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL Protocol and RDF Query Language (SPARQL) queries into HTML fragments, allowing them to be embedded in any existing website. The framework utilizes a two-tier XML Stylesheet Transformation (XSLT) that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely, the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities, while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of queries and the stylesheets used to transform them, followed by a brief walkthrough that follows the data from storage to human- and machine-accessible web page. We conclude with a discussion on content caching and steps toward performing queries across multiple domains.
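A minimal sketch of the query-and-render pipeline described above, with the two-tier XSLT replaced by a small Python formatter for brevity: it runs a SPARQL query (here against the public DBpedia endpoint, purely as an example) and emits an HTML fragment carrying RDFa annotations. The SPARQLWrapper package is a third-party dependency, and the query is illustrative rather than one of the framework's own.

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # third-party: pip install sparqlwrapper

# Query a SPARQL endpoint and render the bindings as an HTML list with RDFa
# attributes (about/typeof/property), so machines can recover the FOAF triples.
ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name WHERE {
  ?person a foaf:Person ; foaf:name ?name .
} LIMIT 5
"""

def query_to_rdfa_html(endpoint, query):
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    items = [
        f'  <li about="{row["person"]["value"]}" typeof="foaf:Person">'
        f'<span property="foaf:name">{row["name"]["value"]}</span></li>'
        for row in rows
    ]
    return ('<ul prefix="foaf: http://xmlns.com/foaf/0.1/">\n'
            + "\n".join(items) + "\n</ul>")

print(query_to_rdfa_html(ENDPOINT, QUERY))
```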
Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo
2015-09-01
There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems require clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for transmitting HRIs (high-resolution images). This limits the accessibility of the software across different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing of medical HRIs and e-whiteboard discussion, and have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named the unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, to maintain browser compatibility and keep server-client responses efficient; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior, keeping co-browsing and discussion between clients in real time; (4) web user interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript into a client RIA (rich Internet application), giving clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs and supports smooth discussion and co-browsing in any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely and may be used in regional health, telemedicine and remote education at low cost. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
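For background, the sketch below shows a conventional, balanced tile pyramid of the kind the paper's unbalanced ratio pyramid structure (URPS) modifies: each level halves the previous resolution until the image fits in a single tile. The image size and tile size are arbitrary examples, not values from the paper.

```python
import math

# Balanced tile pyramid: list every level's resolution and tile count for an image
# cut into fixed-size tiles, halving the resolution from one level to the next.
def pyramid_levels(width, height, tile=256):
    levels = []
    level = 0
    while True:
        cols = math.ceil(width / tile)
        rows = math.ceil(height / tile)
        levels.append((level, width, height, cols * rows))
        if cols == 1 and rows == 1:
            break
        width, height = max(1, width // 2), max(1, height // 2)
        level += 1
    return levels

# e.g. a 100,000 x 80,000 pixel high-resolution image
for level, w, h, tiles in pyramid_levels(100_000, 80_000):
    print(f"level {level}: {w} x {h} px, {tiles} tiles")
```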
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Earth Science Mining Web Services
NASA Technical Reports Server (NTRS)
Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken
2008-01-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
COEUS: “semantic web in a box” for biomedical applications
2012-01-01
Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467
COEUS: "semantic web in a box" for biomedical applications.
Lopes, Pedro; Oliveira, José Luís
2012-12-17
As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model
NASA Technical Reports Server (NTRS)
Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.
2006-01-01
An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics when in the engine differential mode. Based on this model an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation, rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-26
... electronic form, will be posted on the NRC Web site and on the Federal rulemaking Web site http://www... that they do not want publicly disclosed. Federal Rulemaking Web site: Go to http://www.regulations.gov... CONTACT: Mr. Ian C. Jung, Chief, Instrumentation, Controls and Electrical Engineering Branch 2, Division...
The Invisible Web: Uncovering Information Sources Search Engines Can't See.
ERIC Educational Resources Information Center
Sherman, Chris; Price, Gary
This book takes a detailed look at the nature and extent of the Invisible Web, and offers pathfinders for accessing the valuable information it contains. It is designed to fit the needs of both novice and advanced Web searchers. Chapter One traces the development of the Internet and many of the early tools used to locate and share information via…
Improving Concept-Based Web Image Retrieval by Mixing Semantically Similar Greek Queries
ERIC Educational Resources Information Center
Lazarinis, Fotis
2008-01-01
Purpose: Image searching is a common activity for web users. Search engines offer image retrieval services based on textual queries. Previous studies have shown that web searching is more demanding when the search is not in English and does not use a Latin-based language. The aim of this paper is to explore the behaviour of the major search…
ERIC Educational Resources Information Center
Bilal, Dania
2002-01-01
Reports findings of a three-part research project that examined the information seeking behavior and success of 22 seventh-grade science students in using the Web. Discusses problems encountered, including inadequate knowledge of how to use the search engine and poor level of research skills; and considers implications for Web training and system…
ERIC Educational Resources Information Center
Perkins, John
Museums hold enormous amounts of information in collections management systems and publish academic and scholarly research in print journals, exhibition catalogs, virtual museum presentations, and community publications. Much of this rich content is unavailable to web search engines or otherwise gets lost in the vastness of the World Wide Web. The…
Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models
NASA Astrophysics Data System (ADS)
Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria
2017-08-01
In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turboshaft engine driving a variable pitch propeller includes various operating points. Variations in fuel flow and propeller pitch inputs produce different operating conditions which force the controller to adapt rapidly. The important operating points are the idle, cruise and full-thrust cases for the entire flight envelope. A multi-input multi-output (MIMO) version of second-level adaptation using multiple models is developed. Stability analysis using the Lyapunov method is also presented. The proposed method is compared with two conventional techniques, first-level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turboshaft engine demonstrate the performance and fidelity of the proposed method.
Geo-Engineering through Internet Informatics (GEMINI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doveton, John H.; Watney, W. Lynn
The program for development of methodologies was a 3-year interdisciplinary effort to develop an interactive, integrated Internet website named GEMINI (Geo-Engineering Modeling through Internet Informatics) that would build real-time geo-engineering reservoir models for the Internet using the latest technology in Web applications.
Professional Development in Adapted Physical Education with Graduate Web-Based Professional Learning
ERIC Educational Resources Information Center
Sato, Takahiro; Haegele, Justin A.
2017-01-01
Background: The field of adapted physical education (APE) has long struggled to overcome significant and persistent personnel shortages [Healy, S., M. E. Block, and J. Judge. 2014. "Certified Adapted Physical Educator's Perceptions of Advantages and Disadvantages of Online Teacher Development." "Palaestra" 28 (4): 14-16].…
Web-Based System for Adaptable Rubrics: Case Study on CAD Assessment
ERIC Educational Resources Information Center
Company, Pedro; Contero, Manuel; Otey, Jeffrey; Camba, Jorge D.; Agost, María-Jesús; Pérez-López, David
2017-01-01
This paper describes the implementation and testing of our concept of adaptable rubrics, defined as analytical rubrics that arrange assessment criteria at multiple levels that can be expanded on demand. Because of their adaptable nature, these rubrics cannot be implemented in paper formats, nor are they supported by current Learning Management…
Effective Levels of Adaptation to Different Types of Users in Interactive Museum Systems.
ERIC Educational Resources Information Center
Paterno, F.; Mancini, C.
2000-01-01
Discusses user interaction with museum application interfaces and emphasizes the importance of adaptable and adaptive interfaces to meet differing user needs. Considers levels of support that can be given to different users during navigation of museum hypermedia information, using examples from the Web site for the Marble Museum (Italy).…
Examining the Impact of Adaptively Faded Worked Examples on Student Learning Outcomes
ERIC Educational Resources Information Center
Flores, Raymond; Inan, Fethi
2014-01-01
The purpose of this study was to explore effective ways to design guided practices within a web-based mathematics problem solving tutorial. Specifically, this study examined student learning outcome differences between two support designs (e.g. adaptively faded and fixed). In the adaptively faded design, students were presented with problems in…
ERIC Educational Resources Information Center
Inan, Fethi A.; Flores, Raymond; Ari, Fatih; Arslan-Ari, Ismahan
2011-01-01
The purpose of this study was to document the design and development of an adaptive system which individualizes instruction such as content, interfaces, instructional strategies, and resources dependent on two factors, namely student motivation and prior knowledge levels. Combining adaptive hypermedia methods with strategies proposed by…
Using Item Response Theory and Adaptive Testing in Online Career Assessment
ERIC Educational Resources Information Center
Betz, Nancy E.; Turner, Brandon M.
2011-01-01
The present article describes the potential utility of item response theory (IRT) and adaptive testing for scale evaluation and for web-based career assessment. The article describes the principles of both IRT and adaptive testing and then illustrates these with reference to data analyses and simulation studies of the Career Confidence Inventory…
CropEx Web-Based Agricultural Monitoring and Decision Support
NASA Technical Reports Server (NTRS)
Harvey, Craig; Lawhead, Joel
2011-01-01
CropEx is a Web-based agricultural Decision Support System (DSS) that monitors changes in crop health over time. It is designed to be used by a wide range of both public and private organizations, including individual producers and regional government offices with a vested interest in tracking vegetation health. The database and data management system automatically retrieve and ingest data for the area of interest; a second database stores the results of processing and supports the DSS. The processing engine allows server-side analysis of imagery with support for image sub-setting and a set of core raster operations for image classification, creation of vegetation indices, and change detection. The system includes the Web-based (CropEx) interface, a data ingestion system, a server-side processing engine, and a database processing engine. It contains a Web-based interface that has multi-tiered security profiles for multiple users. The interface provides the ability to identify areas of interest to specific users, user profiles, and methods of processing and data types for selected or created areas of interest. A compilation of programs is used to ingest available data into the system, classify that data, profile that data for quality, and make data available to the processing engine immediately upon the data's availability to the system (near real time). The processing engine consists of methods and algorithms used to process the data in a real-time fashion without copying, storing, or moving the raw data. The engine makes results available to the database processing engine for storage and further manipulation. The database processing engine ingests data from the image processing engine, distills those results into numerical indices, and stores each index for an area of interest. This process happens each time new data is ingested and processed for the area of interest, and upon subsequent database entries the database processing engine qualifies each value for each area of interest and conducts a logical processing of results, indicating when and where thresholds are exceeded. Reports are provided at regular, operator-determined intervals that include variances from thresholds and links to view raw data for verification, if necessary. The technology and method of development allow the code base to be easily modified for varied use in real-time and near-real-time processing environments. In addition, the final product will be demonstrated as a means for rapid draft assessment of imagery.
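The threshold logic described for the database processing engine can be pictured with a small sketch like the one below; it illustrates the general idea (flagging index values that deviate from an area's baseline by more than an operator-set fraction) and is not the CropEx code, and the values are invented.

    # Illustrative threshold check on a vegetation-index history for one area of
    # interest; the rule and numbers are assumptions, not the CropEx implementation.
    from statistics import mean

    def exceeds_threshold(index_history, new_value, threshold_fraction=0.2):
        """Return (flag, deviation), where flag is True if the new index value
        deviates from the historical mean by more than threshold_fraction."""
        baseline = mean(index_history)
        deviation = abs(new_value - baseline) / baseline
        return deviation > threshold_fraction, deviation

    history = [0.62, 0.65, 0.63, 0.66]          # e.g., NDVI values (made up)
    flagged, deviation = exceeds_threshold(history, 0.41)
    if flagged:
        print(f"Threshold exceeded: {deviation:.1%} deviation from baseline")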
WebGLORE: a Web service for Grid LOgistic REgression
Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian
2013-01-01
WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. Availability and implementation: http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation. Contact: x1jiang@ucsd.edu PMID:24072732
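The privacy-preserving idea behind grid logistic regression can be sketched in a few lines: each site computes only aggregate statistics (its local gradient and Hessian contributions) and the server combines them in a Newton-Raphson update. The toy example below illustrates that principle with NumPy; it is not the WebGLORE code or its HTTPS/Applet protocol, and the data are random.

    # Toy federated logistic regression: sites share only aggregate statistics.
    import numpy as np

    def local_statistics(X, y, beta):
        """Computed at each site on its own data; only these aggregates are shared."""
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        gradient = X.T @ (y - p)
        hessian = X.T @ (X * (p * (1 - p))[:, None])
        return gradient, hessian

    def server_update(beta, site_stats):
        """Newton-Raphson step at the trusted server from summed aggregates."""
        grad = sum(g for g, _ in site_stats)
        hess = sum(h for _, h in site_stats)
        return beta + np.linalg.solve(hess, grad)

    rng = np.random.default_rng(0)
    sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50)) for _ in range(3)]
    beta = np.zeros(3)
    for _ in range(10):  # Newton-Raphson iterations
        beta = server_update(beta, [local_statistics(X, y, beta) for X, y in sites])
    print("global coefficients:", beta)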
Real-time WebRTC-based design for a telepresence wheelchair.
Van Kha Ly Ha; Rifai Chai; Nguyen, Hung T
2017-07-01
This paper presents a novel approach to the telepresence wheelchair system which is capable of real-time video communication and remote interaction. The investigation of this emerging technology aims at providing a low-cost and efficient way for assisted-living of people with disabilities. The proposed system has been designed and developed by deploying the JavaScript with Hyper Text Markup Language 5 (HTML5) and Web Real-time Communication (WebRTC) in which the adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, and the wheelchair was controlled from a distance using the Internet browser to compare with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth. The video streaming is high-quality with approximately 30 frames per second (fps) and round trip time less than 20 milliseconds (ms). These performance results confirm that the WebRTC approach is a potential method for developing a telepresence wheelchair system.
Xiao, Yong-Hong; Zunic-Kosi, Alenka; Zhang, Long-Wa; Prentice, Thomas R; McElfresh, J Steven; Chinta, Satya P; Zou, Yun-Fan; Millar, Jocelyn G
2015-12-01
Males of many spider species risk being attacked and cannibalized while searching for, courting, and mating with conspecific females. However, there are exceptions. We show that the funnel-web spider, Hololena curta, has 3 adaptations that minimize risk to males during courtship and mating, and enhance reproductive success. First, males detected chemical or tactile signals associated with webs of virgin females, and differentiated them from webs of mated females, enabling males to increase encounter rates with virgin females and avoid aggressive mated females. Second, males produced stereotyped vibrational signals during courting which induced female quiescence and suppressed female aggression. Third, when touched by males, sexually receptive females entered a cataleptic state, allowing males to safely approach and copulate. Because males can mate multiple times and the sex ratio in natural populations of H. curta is female biased, overall reproductive output is likely increased by males of this species avoiding sexual cannibalism. © 2015 Institute of Zoology, Chinese Academy of Sciences.
NASA Astrophysics Data System (ADS)
Navarro-Arribas, Guillermo; Garcia-Alfaro, Joaquin
Web browsers are becoming the universal interface to reach applications and services related with these systems. Different browsing contexts may be required in order to reach them, e.g., use of VPN tunnels, corporate proxies, anonymisers, etc. By browsing context we mean how the user browses the Web, including mainly the concrete configuration of the browser. When the context of the browser changes, its security requirements also change. In this work, we present the use of authorisation policies to automatise the process of controlling the resources of a Web browser when its context changes. The objective of our proposal is oriented towards easing the adaptation to the security requirements of the new context and enforcing them in the browser without the need for user intervention. We present a concrete application of our work as a plug-in for the adaptation of security requirements in the Mozilla/Firefox browser when a context of anonymous navigation through the Tor network is enabled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David B
2012-06-07
Development of a fractional step, a Predictor-Corrector Split (PCS), or what is often known as a projection method, combining an hp-adaptive system in a Finite Element Method (FEM) for combustion modeling has been achieved. This model will advance the accuracy and range of applicability of the KIVA combustion model and software used typically for internal combustion engine modeling. This abstract describes a PCS hp-adaptive FEM model for turbulent reactive flow spanning all velocity regimes and fluids that is being developed for the new KIVA combustion algorithm, particularly for internal combustion engines. The method and general solver are applicable to Newtonian and non-Newtonian fluids and also for incompressible solids, porous media, solidification modeling, and fluid-structure interaction problems. The fuel injection and injector modeling could easily benefit from the capability of solving the fluid-structure interaction problem in an injector, helping to understand cycle-to-cycle variation and cavitation. This is just one example where the new algorithm differs from the old, in addition to handling Conjugate Heat Transfer (CHT), although there are numerous features that make the new system more robust and accurate. In these ways, the PCS hp-adaptive algorithm does not compete with commercial software packages, those often used in conjunction with the currently distributed KIVA codes for engine combustion modeling. In addition, choosing a local ALE method on immersed moving parts represented by an overset grid that is 2nd-order spatially accurate allows for easy grid generation from CAD to fluid grid while also providing robustness in handling any possible moving-parts configuration without any code modifications. The combined methods employed produce a minimal amount of computational effort as compared to fully resolved grids at the same accuracy. We demonstrate the solver on benchmark problems for all flow regimes as follows: (1) 2-D backward-facing step using h-adaptation, (2) 2-D driven cavity, (3) 2-D natural convection in a differentially heated cavity with h-adaptation, (4) NACA 0012 airfoil in 2-D, (5) supersonic flows over compression ramps, (6) 2-D natural convection in a differentially heated cavity with hp-adaptation, (7) 3-D natural convection in a differentially heated sphere with hp-adaptation. In addition, we show the new moving-parts algorithm working for a 2-D piston; the immersed moving-parts method also works for valves, pistons, vanes, etc. The movement is performed using an overset grid method and is 2nd-order accurate in space, and never produces a tangled grid; that is, it is a robust system at any resolution and any parts configuration. We also show CHT for the currently distributed KIVA-4mpi software and some fairly automatic grid generation using Sandia's Cubit unstructured grid generator. A new electronic web-based manual for KIVA-4 has been developed as well.
Frouz, Jan; Thébault, Elisa; Pižl, Václav; Adl, Sina; Cajthaml, Tomáš; Baldrián, Petr; Háněl, Ladislav; Starý, Josef; Tajovský, Karel; Materna, Jan; Nováková, Alena; de Ruiter, Peter C
2013-01-01
Parameters characterizing the structure of the decomposer food web, biomass of the soil microflora (bacteria and fungi) and soil micro-, meso- and macrofauna were studied at 14 non-reclaimed 1- to 41-year-old post-mining sites near the town of Sokolov (Czech Republic). These observations on the decomposer food webs were compared with knowledge of vegetation and soil microstructure development from previous studies. The amount of carbon entering the food web increased with succession age in a similar way as the total amount of C in food web biomass and the number of functional groups in the food web. Connectance did not show any significant changes with succession age, however. In early stages of the succession, the bacterial channel dominated the food web. Later on, in shrub-dominated stands, the fungal channel took over. Even later, in the forest stage, the bacterial channel prevailed again. The best predictor of the fungal to bacterial ratio is the thickness of the fermentation layer. We argue that these changes correspond with changes in topsoil microstructure driven by a combination of plant organic matter input and engineering effects of earthworms. In early stages, soil is alkaline, and a discontinuous litter layer on the soil surface promotes bacterial biomass growth, so the bacterial food web channel can dominate. Litter accumulation on the soil surface supports the development of the fungal channel. In older stages, earthworms arrive, mix litter into the mineral soil and form an organo-mineral topsoil, which is beneficial for bacteria and enhances the bacterial food web channel.
77 FR 9868 - Airworthiness Directives; Honeywell International Inc. Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... Airworthiness Directives; Honeywell International Inc. Turbofan Engines AGENCY: Federal Aviation Administration... -5BR series turbofan engines. This proposed AD was prompted by a report of a rim/web separation of a..., -4R, -5AR, -5BR, and -5R series turbofan engines, with an LPT1 rotor assembly, P/N 3074748-4, 3074748...
Archuleta, Christy-Ann M.; Eames, Deanna R.
2009-01-01
The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.
Wong, Vincent; Smith, Ariella J; Hawkins, Nicholas J; Kumar, Rakesh K; Young, Noel; Kyaw, Merribel; Velan, Gary M
2015-10-01
Diagnostic imaging is under-represented in medical curricula globally. Adaptive tutorials, online intelligent tutoring systems that provide a personalized learning experience, have the potential to bridge this gap. However, there is limited evidence of their effectiveness for learning about diagnostic imaging. We performed a randomized mixed methods crossover trial to determine the impact of adaptive tutorials on perceived engagement and understanding of the appropriate use and interpretation of common diagnostic imaging investigations. Although concurrently engaged in disparate blocks of study, 99 volunteer medical students (from years 1-4 of the 6-year program) were randomly allocated to one of two groups. In the first arm of the trial on chest X-rays, one group received access to an adaptive tutorial, whereas the other received links to an existing peer-reviewed Web resource. These two groups crossed over in the second arm of the trial, which focused on computed tomography scans of the head, chest, and abdomen. At the conclusion of each arm of the trial, both groups completed an examination-style assessment, comprising questions both related and unrelated to the topics covered by the relevant adaptive tutorial. Online questionnaires were used to evaluate student perceptions of both learning resources. In both arms of the trial, the group using adaptive tutorials obtained significantly higher assessment scores than controls. This was because of higher assessment scores by senior students in the adaptive tutorial group when answering questions related to topics covered in those tutorials. Furthermore, students indicated significantly better engagement with adaptive tutorials than the Web resource and rated the tutorials as a significantly more valuable tool for learning. Medical students overwhelmingly accept adaptive tutorials for diagnostic imaging. The tutorials significantly improve the understanding of diagnostic imaging by senior students. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Investigating Knowledge Creation Technology in an Engineering Course
ERIC Educational Resources Information Center
Jalonen, Satu; Lakkala, Minna; Paavola, Sami
2011-01-01
The aim of the present study was to examine the technological affordances of a web-based collaborative learning technology, Knowledge Practices Environment (KPE), for supporting different dimensions of knowledge creation processes. KPE was used by engineering students in a practically oriented undergraduate engineering course. The study…
Fuel control for gas turbine engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stearns, C.F.; Tutherly, H.W.
1983-12-27
The basic gas turbine engine hydromechanical fuel control is adaptable to different engine configurations such as turbofan, turboprop and turboshaft engines by incorporating in the main housing those elements having a commonality to all engine configurations and providing a removable block for each configuration having the necessary control elements and flow passages required for that particular configuration. That is to say, a block with the elements peculiar to a turbofan engine could be replaced by a mating block that includes those elements peculiar to a turboshaft engine in adapting the control for a turboshaft configuration. Similarly, another block with those elements peculiar to a turboprop engine could replace any of the other blocks in adapting the control to a turboprop configuration. Obviously the basic control has the necessary flow passages terminating at the interface with the block, and these flow passages mate with corresponding passages in the block.
40 CFR 63.820 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., or wide-web flexographic printing presses are operated, and (2) Each new and existing facility at which publication rotogravure, product and packaging rotogravure, or wide-web flexographic printing... specify, using the best monitoring methods and engineering judgment, the amount of excess emissions that...
Speeding on the Information Superhighway: Strategies for Saving Time on the Web.
ERIC Educational Resources Information Center
Colaric, Susan M.; Carr-Chellman, Alison A.
2000-01-01
Outlines ways to make online searching more efficient. Highlights include starting with printed materials; online reference libraries; subject directories such as Yahoo; search engines; evaluating Web sites, including reliability; bookmarking helpful sites; and using links. (LRW)
Discovery of Sound in the Sea (DOSITS) Web Site Development
2016-06-20
Discovery of Sound in the Sea (DOSITS) Web Site Development, ONR Grant N00014-12-1-0169. Period of Performance: 01 January 2012 - 31 December 2014. Principal Investigator: Peter F… The web traffic numbers exclude all known search engines and other spiders, as well as traffic from the University of Rhode Island Graduate School
Investigating the Use of Inquiry & Web-Based Activities with Inclusive Biology Learners
ERIC Educational Resources Information Center
Bodzin, Alec M.; Waller, Patricia L.; Edwards, Lana; Darlene Kale, Santoro
2007-01-01
A Web-integrated biology program is used to explore how to best assist inclusive high school students to learn biology with inquiry-based activities. Classroom adaptations and instructional strategies teachers may use to assist in promoting biology learning with inclusive learners are discussed.
Re-Examining Cognition during Student-Centered, Web-Based Learning
ERIC Educational Resources Information Center
Hannafin, Michael; Hannafin, Kathleen; Gabbitas, Bruce
2009-01-01
During student-centered learning, the individual assumes responsibility for determining learning goals, monitoring progress toward meeting goals, adjusting or adapting approaches as warranted, and determining when individual goals have been adequately addressed. This can be particularly challenging while learning from the World-Wide Web, where…
Bringing simulation to engineers in the field: a Web 2.0 approach.
Haines, Robert; Khan, Kashif; Brooke, John
2009-07-13
Field engineers working on water distribution systems have to implement day-to-day operational decisions. Since pipe networks are highly interconnected, the effects of such decisions are correlated with hydraulic and water quality conditions elsewhere in the network. This makes the provision of predictive decision support tools (DSTs) for field engineers critical to optimizing the engineering work on the network. We describe how we created DSTs to run on lightweight mobile devices by using the Web 2.0 technique known as Software as a Service. We designed our system following the architectural style of representational state transfer. The system not only displays static geographical information system data for pipe networks, but also dynamic information and prediction of network state, by invoking and displaying the results of simulations running on more powerful remote resources.
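A resource-oriented (REST) service of this kind can be sketched with a tiny Flask application; the route and field names below are hypothetical and the values canned, standing in for a call to a remote hydraulic simulation rather than reproducing the authors' system.

    # Minimal REST sketch of serving a predicted network state to a mobile client.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/networks/<network_id>/state", methods=["GET"])
    def network_state(network_id):
        # A real service would trigger or look up a simulation run on a remote
        # resource; canned values are returned here to show the interface shape.
        return jsonify({
            "network": network_id,
            "predicted_pressure_m": 32.5,
            "predicted_chlorine_mg_l": 0.4,
        })

    if __name__ == "__main__":
        app.run(port=8080)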
The Web and Information Literacy: Scaffolding the use of Web Sources in a Project-Based Curriculum
ERIC Educational Resources Information Center
Walton, Marion; Archer, Arlene
2004-01-01
In this article we describe and discuss a three-year case study of a course in web literacy, part of the academic literacy curriculum for first-year engineering students at the University of Cape Town (UCT). Because they are seen as practical knowledge, not theoretical, information skills tend to be devalued at university and rendered invisible to…
Experimental evaluation of two 36 inch by 47 inch graphite/epoxy sandwich shear webs
NASA Technical Reports Server (NTRS)
Bush, H. G.
1975-01-01
The design and testing of two large (36 in. x 47 in.) graphite/epoxy sandwich shear webs are described. One sandwich web was designed to exhibit strength failure of the facings at a shear load of 7638 lbs/in., which is a characteristic loading for the space shuttle orbiter main engine thrust beam structure. The second sandwich web was designed to exhibit general instability failure at a shear load of 5000 lbs/in., to identify problem areas of stability-critical sandwich webs and to assess the adequacy of contemporary analysis techniques.
An Integrated Approach to Recruiting and Retaining Appalachian Engineering Students
ERIC Educational Resources Information Center
Winn, Gary; Hensel, Robin; Curtis, Reagan; Taylor, Lydotta M.; Cilento, Gene
2012-01-01
Recruiting and retaining Appalachian engineering students is difficult for a variety of ecological and cultural reasons. At West Virginia University an NSF STEP grant has allowed the development of specific interventions to evolve from an ecological model we describe here. The interventions include web-based, realistic engineering design exercises…
40 CFR 1039.625 - What requirements apply under the program for equipment-manufacturer flexibility?
Code of Federal Regulations, 2011 CFR
2011-07-01
... is manufactured. (4) An e-mail address and phone number to contact for further information, or a Web... secondary engine manufacturers. (l) [Reserved] (m) Additional exemptions for technical or engineering... avoided with reasonable discretion have resulted in technical or engineering problems that prevent you...
Publications - AR 2010-C | Alaska Division of Geological & Geophysical
DGGS AR 2010-C Publication Details. Title: Engineering Geology FY11 project descriptions, in DGGS Staff, Alaska Division of Geological & Geophysical…
New Information Technologies: Possible Implications for Libraries.
ERIC Educational Resources Information Center
de Stricker, Ulla
1998-01-01
Presents observations about developments in information technology that will influence the information industry and libraries of the future. Discusses search engine capabilities; push technology; electronic commerce; WebTV; and optical discs with links to Web sites. Ten figures provide illustrations and charts. (AEF)
Design of web platform for science and engineering in the model of open market
NASA Astrophysics Data System (ADS)
Demichev, A. P.; Kryukov, A. P.
2016-09-01
This paper presents the design and operation algorithms of a web platform for convenient, secure and effective remote interaction, on the principles of an open market, between users and providers of scientific application software and databases.
How To Get Your Web Page Noticed.
ERIC Educational Resources Information Center
Schrock, Kathleen
1997-01-01
Presents guidelines for making a Web site noticeable. Discusses submitting the URL to directories, links, and announcement lists, and sending the site over the server via FTP to search engines. Describes how to index the site with "Title,""Heading," and "Meta" tags. (AEF)
COMPUTER-AIDED SCIENCE POLICY ANALYSIS AND RESEARCH (WEBCASPAR)
WebCASPAR is a database system containing information about academic science and engineering resources and is available on the World Wide Web. Included in the database is information from several of SRS's academic surveys plus information from a variety of other sources, includin...
Federal Register 2010, 2011, 2012, 2013, 2014
2005-11-16
... Reference System (TRS) [see http://www.epa.gov/trs ] in order to better support future semantic Web needs... creation of glossaries for Web pages and documents, a common vocabulary for search engines, and in the...
WebGURU: The Web-Based Guide to Research for Undergraduates
ERIC Educational Resources Information Center
Mabrouk, Patricia; McIntyre, Ryan; Virrankoski, Milena; Jeliffe, Kirsten
2007-01-01
Undergraduate research (UR) is widely promoted by faculty, administrators, institutions of higher learning, government laboratories, private industry, professional associations, and funding agencies as an effective method of training college students pursuing careers in science, technology, engineering, and mathematics (STEM) disciplines at…
Web Services Integration on the Fly
2008-12-01
Table of contents excerpt: NetBeans 6.1 and version control (NetBeans Integrated Development Environment (IDE); forward and reverse engineering; implementation using NetBeans; Subversion (SVN) for version control in NetBeans); Protégé authoring tool for the Semantic Web.
Evaluation Framework Based on Fuzzy Measured Method in Adaptive Learning Systems
ERIC Educational Resources Information Center
Ounaies, Houda Zouari; Jamoussi, Yassine; Ben Ghezala, Henda Hajjami
2008-01-01
Currently, e-learning systems are mainly web-based applications and tackle a wide range of users all over the world. Fitting learners' needs is considered as a key issue to guaranty the success of these systems. Many researches work on providing adaptive systems. Nevertheless, evaluation of the adaptivity is still in an exploratory phase.…
Review of Extracting Information From the Social Web for Health Personalization
Karlsen, Randi; Bonander, Jason
2011-01-01
In recent years the Web has come into its own as a social platform where health consumers are actively creating and consuming Web content. Moreover, as the Web matures, consumers are gaining access to personalized applications adapted to their health needs and interests. The creation of personalized Web applications relies on extracted information about the users and the content to personalize. The Social Web itself provides many sources of information that can be used to extract information for personalization apart from traditional Web forms and questionnaires. This paper provides a review of different approaches for extracting information from the Social Web for health personalization. We reviewed research literature across different fields addressing the disclosure of health information in the Social Web, techniques to extract that information, and examples of personalized health applications. In addition, the paper includes a discussion of technical and socioethical challenges related to the extraction of information for health personalization. PMID:21278049
User Interface Design in Medical Distributed Web Applications.
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
2016-01-01
User interfaces are important to facilitate easy learning and operating with an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation. The technology in the background is an important tool to accomplish this. The present work aims at creating a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed, and its structure is designed using specific tools, on source files. In the second phase, a new graphic interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at once. The technique has the advantage of a simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, such as HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services.
Thinking about feathers: Adaptations of Golden Eagle rectrices
Ellis, D.H.; Lish, J.W.
2006-01-01
The striking black and white plumage of the juvenile Golden Eagle (Aquila chrysaetos) provides an excellent opportunity to examine the possible selective forces influencing the strategic placement of dark pigment in birds. The conflict between opposing selective pressures (first, toward large white patches, which may allay aggression in adults, and second, toward dark plumage to promote camouflage and limit solar and abrasive wear) provides the stage whereon are revealed a score of pigmentation traits of potential adaptive value. The general pigmentation trend is for zones that are more exposed to the sun to be darker than elsewhere. More specifically: (1) for rectrices and remiges, outer webs are darker than inner; (2) for those few feathers (e.g., central rectrices, some scapulars, and some tertials), where both inner and outer webs are heavily and nearly equally solar exposed, pigmentation is supplied similarly on both webs; (3) outermost primaries and rectrices are darkest of all and are structurally similar; (4) for central rectrices, subject to high levels of abrasion with substrate, the tip is paler (resultant flexibility may limit breakage); and (5) pigment is heavier along or on the rachis than on the webs. Many of the traits listed above for the Golden Eagle are also found in other families of birds. Traits of the tail common to many species were a terminal pale tip, a subterminal dark band, rachis darker than vane, and outer webs darker than inner for both remiges and rectrices. The most widespread traits likely have adaptive value. © 2006 The Raptor Research Foundation, Inc.
Evaluation of Web Accessibility of Consumer Health Information Websites
Zeng, Xiaoming; Parmanto, Bambang
2003-01-01
The objectives of the study are to construct a comprehensive framework for web accessibility evaluation, to evaluate the current status of web accessibility of consumer health information websites and to investigate the relationship between web accessibility and property of the websites. We selected 108 consumer health information websites from the directory service of a Web search engine. We used Web accessibility specifications to construct a framework for the measurement of Web Accessibility Barriers (WAB) of website. We found that none of the websites is completely accessible to people with disabilities, but governmental and educational health information websites exhibit better performance on web accessibility than other categories of websites. We also found that the correlation between the WAB score and the popularity of a website is statistically significant. PMID:14728272
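The reported association between WAB scores and site popularity can be reproduced in form (not in data) with a rank correlation test; the numbers in the sketch below are invented placeholders, and the specific test used in the study is not assumed.

    # Rank correlation between accessibility-barrier scores and popularity ranks.
    from scipy.stats import spearmanr

    wab_scores = [5.2, 3.1, 7.8, 2.4, 6.0, 4.5]     # higher = more barriers (made up)
    popularity_ranks = [120, 15, 480, 8, 260, 90]   # e.g., traffic rank (made up)

    rho, p_value = spearmanr(wab_scores, popularity_ranks)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")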
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2016-06-01
Award Number: W81XWH-14-1-0143. Title: Comparing Web, Group and Telehealth Formats of a Military Parenting Program. Principal Investigator: … conducting a three-group, two-site randomized trial to test the comparative effectiveness of three ADAPT delivery approaches for 360 reintegrating…
An Expertise Recommender using Web Mining
NASA Technical Reports Server (NTRS)
Joshi, Anupam; Chandrasekaran, Purnima; ShuYang, Michelle; Ramakrishnan, Ramya
2001-01-01
This report explored techniques to mine the web pages of scientists to extract information regarding their expertise, build expertise chains and referral webs, and semi-automatically combine this information with directory information services to create a recommender system that permits query by expertise. The approach included experimenting with existing techniques reported in the research literature in the recent past, adapting them as needed. In addition, software tools were developed to capture and use this information.
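A toy version of the expertise-matching step might look like the sketch below: each scientist's page is reduced to term frequencies and ranked against a query. The page text, names, and scoring rule are illustrative assumptions, not the techniques actually adapted in the report.

    # Toy expertise recommender: rank scientists' pages against a query by term overlap.
    import re
    from collections import Counter

    def profile(text):
        return Counter(re.findall(r"[a-z]{3,}", text.lower()))

    pages = {  # hypothetical scraped page text per scientist
        "alice": "data mining web personalization recommender systems clustering",
        "bob": "propulsion combustion turbofan engine simulation",
    }
    profiles = {name: profile(text) for name, text in pages.items()}

    def recommend(query):
        q = profile(query)
        scores = {name: sum(p[t] * q[t] for t in q) for name, p in profiles.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(recommend("web mining recommender"))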
A web ontology for brain trauma patient computer-assisted rehabilitation.
Zikos, Dimitrios; Galatas, George; Metsis, Vangelis; Makedon, Fillia
2013-01-01
In this paper we describe CABROnto, which is a web ontology for the semantic representation of the computer assisted brain trauma rehabilitation. This is a novel and emerging domain, since it employs the use of robotic devices, adaptation software and machine learning to facilitate interactive and adaptive rehabilitation care. We used Protégé 4.2 and Protégé-Owl schema editor. The primary goal of this ontology is to enable the reuse of the domain knowledge. CABROnto has nine main classes, more than 50 subclasses, existential and cardinality restrictions. The ontology can be found online at Bioportal.
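To give a feel for what such an ontology looks like programmatically, the sketch below declares a few OWL classes and one subclass axiom with rdflib; the class names and namespace are guesses for illustration and do not reproduce the published CABROnto schema.

    # Declaring illustrative ontology classes with rdflib (not the actual CABROnto).
    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    CABRO = Namespace("http://example.org/cabronto#")  # hypothetical namespace
    g = Graph()
    g.bind("cabro", CABRO)

    for cls in ("Patient", "RehabilitationSession", "RoboticDevice", "AdaptationRule"):
        g.add((CABRO[cls], RDF.type, OWL.Class))

    # One simple subclass axiom: robotic devices are a kind of equipment.
    g.add((CABRO.Equipment, RDF.type, OWL.Class))
    g.add((CABRO.RoboticDevice, RDFS.subClassOf, CABRO.Equipment))

    print(g.serialize(format="turtle"))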
‘Sciencenet’—towards a global search and share engine for all scientific knowledge
Lütjohann, Dominic S.; Shah, Asmi H.; Christen, Michael P.; Richter, Florian; Knese, Karsten; Liebel, Urban
2011-01-01
Summary: Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, ‘Sciencenet’, which facilitates rapid searching over this large data space. By ‘bringing the search engine to the data’, we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. Availability and Implementation: The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the ‘AskMe’ experiment publisher is written in Python 2.7, and the backend ‘YaCy’ search engine is based on Java 1.6. Contact: urban.liebel@kit.edu Supplementary Material: Detailed instructions and descriptions can be found on the project homepage: http://sciencenet.kit.edu. PMID:21493657
Web 2.0 and Emerging Technologies in Online Learning
ERIC Educational Resources Information Center
Diaz, Veronica
2010-01-01
As online learning continues to grow, so do the free or nearly free Web 2.0 and emerging online learning technologies available to faculty and students. This chapter explores the implementation process and corresponding considerations of adapting such tools for teaching and learning. Issues addressed include copyright, intellectual property,…
Using Web-Based Foreign Advertisements in International Marketing Classes
ERIC Educational Resources Information Center
Ryan, Jason
2011-01-01
The author examines the use of the Web-based foreign advertisements for enhancing the international awareness of undergraduate marketing students. An analysis compares the adaptation of advertisements for identical products to the cultural perceptions and values of consumers in different countries. In a sample of 110 international marketing…
Mendeley: Creating Communities of Scholarly Inquiry through Research Collaboration
ERIC Educational Resources Information Center
Zaugg, Holt; West, Richard E.; Tateishi, Isaku; Randall, Daniel L.
2010-01-01
Mendeley is a free, web-based tool for organizing research citations and annotating their accompanying PDF articles. Adapting Web 2.0 principles for academic scholarship, Mendeley integrates the management of the research articles with features for collaborating with researchers locally and worldwide. In this article the features of Mendeley are…
Propagation of Species at Risk Atlantic Pigtoe on Military Installations
2010-04-30
adult mussels are constantly being adapted to meet the needs of each species. Procedures include potential host fish collection… Related web atlases: VA freshwater mussel web atlas (Watson); freshwater gastropod of VA web atlas, Atlantic slope (Watson); other issues of interest.
Speakeasy Studio and Cafe: Information Literacy, Web-based Library Instruction, and Technology.
ERIC Educational Resources Information Center
Jacobs, Mark
2001-01-01
Discussion of academic library instruction and information literacy focuses on a Web-based program developed at Washington State University called Speakeasy Studio and Cafe that is used for bibliographic instruction. Highlights include the research process; asking the right question; and adapting to students' differing learning styles. (LRW)
Under Constriction: Colonization and Synthetic Institutionalization of Web Space.
ERIC Educational Resources Information Center
Killoran, John B.
2002-01-01
Draws on a study of 106 personal homepages in order to present a theoretical model of how citizens' potentials as Web publishers are being compromised by the leadership of institutional discourses. Proposes an analogous process of synthetic institutionalization, in which personal homepage publishers affect institutional poses by adapting Norman…
Leveraging the Semantic Web for Adaptive Education
ERIC Educational Resources Information Center
Kravcik, Milos; Gasevic, Dragan
2007-01-01
In the area of technology-enhanced learning reusability and interoperability issues essentially influence the productivity and efficiency of learning and authoring solutions. There are two basic approaches how to overcome these problems--one attempts to do it via standards and the other by means of the Semantic Web. In practice, these approaches…
ERIC Educational Resources Information Center
Rushton, Erin E.; Kelehan, Martha Daisy; Strong, Marcy A.
2008-01-01
Search engine use is one of the most popular online activities. According to a recent OCLC report, nearly all students start their electronic research using a search engine instead of the library Web site. Instead of viewing search engines as competition, however, librarians at Binghamton University Libraries decided to employ search engine…
Research on Agriculture Domain Meta-Search Engine System
NASA Astrophysics Data System (ADS)
Xie, Nengfu; Wang, Wensheng
The rapid growth of agricultural web information means that a single search engine often cannot return satisfactory results for users' queries. In this paper, we propose an agriculture domain search engine system, called ADSE, that obtains results through an advanced interface to several search engines and aggregates them. We also discuss two key technologies: agriculture information determination and the engine itself.
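The aggregation step of such a meta-search engine can be illustrated with reciprocal rank fusion over the ranked lists returned by the underlying engines; the sketch below uses made-up result URLs and is a generic illustration, not the ADSE algorithm itself.

    # Merging ranked result lists from several engines with reciprocal rank fusion.
    def reciprocal_rank_fusion(result_lists, k=60):
        scores = {}
        for results in result_lists:
            for rank, url in enumerate(results, start=1):
                scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    engine_a = ["http://a.example/wheat", "http://b.example/soil", "http://c.example/pests"]
    engine_b = ["http://b.example/soil", "http://d.example/irrigation", "http://a.example/wheat"]

    print(reciprocal_rank_fusion([engine_a, engine_b]))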
Photonics applications and web engineering: WILGA Summer 2016
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2016-09-01
Wilga Summer 2016 Symposium on Photonics Applications and Web Engineering was held on 29 May - 06 June. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2016 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations.
Photonics applications and web engineering: WILGA Summer 2015
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2015-09-01
Wilga Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations.
Mobile medical visual information retrieval.
Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning
2012-01-01
In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.
40 CFR 1042.130 - Installation instructions for vessel manufacturers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... engine in a way that makes the engine's emission control information label hard to read during normal... equivalent format. For example, you may post instructions on a publicly available Web site for downloading or...
Study on online community user motif using web usage mining
NASA Astrophysics Data System (ADS)
Alphy, Meera; Sharma, Ajay
2016-04-01
Web usage mining is the application of data mining used to extract useful information from online communities. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday, 6 August 2015. It is difficult to get the needed data from these billions of web pages on the World Wide Web; here lies the importance of web usage mining. Personalizing the search engine helps the web user identify the most used data in an easy way: it reduces time consumption and enables automatic site search and automatic restoration of useful sites. This study surveys the techniques used in pattern discovery and analysis in web usage mining, from the earliest to the latest, covering 1996 to 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and websites.
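The preprocessing stage that every web usage mining pipeline starts from can be sketched briefly: parse server access-log lines and tally page requests per visitor, which later pattern-discovery steps build on. The log lines below are fabricated for illustration.

    # Parsing Common Log Format lines and counting page requests per visitor.
    import re
    from collections import Counter, defaultdict

    LOG_RE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "GET (?P<path>\S+) [^"]+" \d+ \d+')

    log_lines = [
        '10.0.0.1 - - [06/Aug/2015:10:00:00 +0000] "GET /courses/index.html HTTP/1.1" 200 1043',
        '10.0.0.1 - - [06/Aug/2015:10:00:40 +0000] "GET /courses/python.html HTTP/1.1" 200 2048',
        '10.0.0.2 - - [06/Aug/2015:10:01:00 +0000] "GET /courses/index.html HTTP/1.1" 200 1043',
    ]

    pages = Counter()
    sessions = defaultdict(list)
    for line in log_lines:
        match = LOG_RE.match(line)
        if match:
            pages[match["path"]] += 1
            sessions[match["ip"]].append(match["path"])

    print(pages.most_common(3))   # most requested pages, a basis for personalization
    print(dict(sessions))         # per-visitor click paths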
Adjacency and Proximity Searching in the Science Citation Index and Google
2005-01-01
major database search engines, including commercial S&T database search engines (e.g., Science Citation Index (SCI), Engineering Compendex (EC)…PubMed, OVID), Federal agency award database search engines (e.g., NSF, NIH, DOE, EPA, as accessed in Federal R&D Project Summaries), Web search engines (e.g.…searching. Some database search engines allow strict constrained co-occurrence searching as a user option (e.g., OVID, EC), while others do not (e.g., SCI
40 CFR 1065.310 - Torque calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... maintenance. Use good engineering judgment to repeat the calibration. Follow the torque transducer... the U.S. National Oceanographic and Atmospheric Administration's surface gravity prediction Web site at http://www.ngs.noaa.gov/cgi-bin/grav_pdx.prl. If this Web site is unavailable, you may use the...
18 CFR 157.34 - Notice of open season.
Code of Federal Regulations, 2010 CFR
2010-04-01
... provide reasonable public notice of an open season through methods including postings on Internet Web..., engineering, design, capacity or operational constraints, or accommodating the request would otherwise... not posted on the open season Internet Web site or that is otherwise also available to the general...
18 CFR 157.34 - Notice of open season.
Code of Federal Regulations, 2011 CFR
2011-04-01
... including postings on Internet Web sites, press releases, direct mail solicitations, and other advertising... open season or allocation of capacity that is not posted on the open season Internet Web site or that... due to economic, engineering, design, capacity or operational constraints, or accommodating the...
40 CFR 1065.310 - Torque calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... maintenance. Use good engineering judgment to repeat the calibration. Follow the torque transducer... the U.S. National Oceanographic and Atmospheric Administration's surface gravity prediction Web site at http://www.ngs.noaa.gov/cgi-bin/grav_pdx.prl. If this Web site is unavailable, you may use the...
Web-based Interactive Simulator for Rotating Machinery.
ERIC Educational Resources Information Center
Sirohi, Vijayalaxmi
1999-01-01
Baroma (Balance of Rotating Machinery), a Web-based interactive educational engineering software package for teaching and learning, combines didactic and software-ergonomic approaches. The software, in tutorial form, simulates a problem using Visual Interactive Simulation in a graphic display, and animation is brought about through a graphical user interface…
Web information retrieval based on ontology
NASA Astrophysics Data System (ADS)
Zhang, Jian
2013-03-01
The purpose of Information Retrieval (IR) is to find the set of documents relevant to a user's specific information need. The traditional Information Retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One major drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest, so many irrelevant results are returned and users are burdened with picking useful answers out of them. To tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is that it enhances search mechanisms through the use of ontologies. In this paper, we present our ontology-based approach to personalizing a web search engine and discuss its key techniques. Compared with previous research, our work concentrates on semantic similarity and on the whole process, including query submission and information annotation.
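To illustrate the idea of semantic similarity used for re-ranking, here is a toy sketch under stated assumptions: the concept names, the tiny is-a hierarchy, and the 1/(1+distance) measure are all illustrative choices, not the paper's actual ontology or similarity function.

```python
# Toy path-based semantic similarity over a small is-a hierarchy,
# used to re-rank documents against a query concept. All names and
# the similarity formula are illustrative assumptions.
from collections import deque

ONTOLOGY = {  # child -> parent
    "sedan": "car", "suv": "car", "car": "vehicle",
    "truck": "vehicle", "bicycle": "vehicle", "vehicle": "thing",
}

def _neighbors(c):
    """Undirected neighbors of a concept in the is-a hierarchy."""
    nbrs = set()
    if c in ONTOLOGY:
        nbrs.add(ONTOLOGY[c])
    nbrs.update(child for child, parent in ONTOLOGY.items() if parent == c)
    return nbrs

def concept_distance(a, b):
    """Shortest number of is-a edges between two concepts (BFS)."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for nbr in _neighbors(node):
            if nbr == b:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return float("inf")

def semantic_similarity(a, b):
    return 1.0 / (1.0 + concept_distance(a, b))

docs = [("doc1", "truck"), ("doc2", "sedan"), ("doc3", "bicycle")]
ranked = sorted(docs, key=lambda d: semantic_similarity("car", d[1]), reverse=True)
print(ranked)  # doc2 (sedan) ranks first: closest to "car" in the hierarchy
```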
Analysis and Development of a Web-Enabled Planning and Scheduling Database Application
2013-09-01
establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for...development, develop, design, process, re-engineering, reengineering, MySQL, structured query language, SQL, myPHPadmin...relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of
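A minimal sketch of the entity-relationship-to-table step this kind of thesis describes, using the standard-library sqlite3 module as a stand-in for MySQL; the table and column names are hypothetical and not taken from the report.

```python
# Sketch: turn a simple entity-relationship design into tables and query
# them the way a web front end would. sqlite3 stands in for MySQL here;
# schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (project_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE task (
    task_id    INTEGER PRIMARY KEY,
    project_id INTEGER NOT NULL REFERENCES project(project_id),
    title      TEXT NOT NULL,
    start_date TEXT,
    end_date   TEXT
);
""")
conn.execute("INSERT INTO project (name) VALUES (?)", ("Maintenance plan",))
conn.execute(
    "INSERT INTO task (project_id, title, start_date) VALUES (?, ?, ?)",
    (1, "Inspect engines", "2013-09-01"),
)
for row in conn.execute("SELECT title, start_date FROM task WHERE project_id = 1"):
    print(row)  # a web interface would render rows like this one
```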
Surfing the Web for Science: Early Data on the Users and Uses of The Why Files.
ERIC Educational Resources Information Center
Eveland, William P., Jr.; Dunwoody, Sharon
1998-01-01
This brief offers an initial look at one science site on the World Wide Web (The Why Files: http://whyfiles.news.wise.edu) in order to consider the educational potential of this technology. The long-term goal of the studies of this site is to understand how the World Wide Web can be used to enhance science, mathematics, engineering, and technology…
Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches
ERIC Educational Resources Information Center
Adam, Anna; Mowers, Helen
2008-01-01
There are a lot of useful tools on the Web, all those social applications, and the like. Still most people go online for one thing--to perform a basic search. For most fact-finding missions, the Web is there. But--as media specialists well know--the sheer wealth of online information can hamper efforts to focus on a few reliable references.…
Studies on behaviour of information to extract the meaning behind the behaviour
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Syah, R.; Elveny, M.
2017-01-01
The Web as social media can be used as a reference for determining social behaviour. However, information extraction that involves a search engine does not easily give that picture. Several properties of the search engine must be formally disclosed to provide assurance that the information is feasible. Although a great deal of research has revealed interest in the Web as social media, only a few studies have revealed the behaviour of information related to social behaviour. In this case, formal steps are needed to present the possibilities of the related properties. Twelve interconnected properties constitute the behaviour of information, and several meanings are then revealed based on simulation results from any search engine.
Zhao, Ying-Jun; Zeng, Yan; Chen, Lei; Dong, Yang; Wang, Wen
2014-12-01
As ancient arthropods with a history of 390 million years, spiders have evolved numerous morphological forms resulting from adaptation to different environments. The venom and silk of spiders, which have promising commercial applications in agriculture, medicine and engineering, are of special interest to researchers. However, little is known about their genomic components, which hinders not only the understanding of spider biology but also the utilization of their valuable genes. Here we report deep-sequenced and de novo assembled transcriptomes of three orb-web spider species, Gasteracantha arcuata, Nasoonaria sinensis and Gasteracantha hasselti, which are distributed in tropical forests of south China. With Illumina paired-end RNA-seq technology, 54,871, 101,855 and 75,455 unigenes were obtained for the three spider species, respectively, of which 9,300, 10,001 and 10,494 unique genes were annotated, respectively. From these annotated unigenes, we comprehensively analyzed silk and toxin gene components and structures for the three spider species. Our study provides valuable transcriptome data for three spider species that previously lacked any genetic/genomic data. The results lay the first fundamental genomic basis for exploiting gene resources from these spiders. © 2013 Institute of Zoology, Chinese Academy of Sciences.
BioCarian: search engine for exploratory searches in heterogeneous biological databases.
Zaki, Nazar; Tennakoon, Chandana
2017-10-02
There are a large number of biological databases publicly available to scientists on the web. There are also many private databases generated in the course of research projects. These databases exist in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed, and has additional features such as ranking facet values by several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices is available. For advanced users, SPARQL queries can be run directly on the databases; using this feature, users can incorporate federated searches of SPARQL endpoints. We used the search engine to perform an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com. We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
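The following is a minimal sketch of the tabular-to-RDF conversion and SPARQL querying that this kind of search builds on, written with rdflib; the namespace, column names, and example rows are hypothetical and not taken from BioCarian itself.

```python
# Sketch: convert tabular rows to RDF triples and run a facet-style SPARQL
# query. Namespace and data are hypothetical assumptions.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/bio/")          # hypothetical namespace
rows = [                                           # hypothetical tabular data
    {"id": "gene1", "name": "TP53", "organism": "Homo sapiens"},
    {"id": "gene2", "name": "BRCA1", "organism": "Homo sapiens"},
]

g = Graph()
for row in rows:
    subject = EX[row["id"]]
    g.add((subject, EX["name"], Literal(row["name"])))
    g.add((subject, EX["organism"], Literal(row["organism"])))

# Facet-style query: all gene names for a chosen organism value.
results = g.query(
    """
    SELECT ?name WHERE {
        ?s <http://example.org/bio/organism> "Homo sapiens" ;
           <http://example.org/bio/name> ?name .
    }
    """
)
for (name,) in results:
    print(name)
```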
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays, selective access to information on the Web is provided by search engines, but when the data includes spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an Application Server as a simulation of the Web. As a spatial search engine, our system provides search capability across GML documents, an important step towards improving the efficiency of search engines.
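Here is an illustrative sketch of the crawler/indexer split described above: parse a simplified, hypothetical GML fragment and add its text content to an in-memory inverted index. Real GML documents and schemas are considerably richer than this example.

```python
# Sketch: extract element text from a (simplified) GML document and index it
# by keyword. The GML fragment and document id are hypothetical.
import xml.etree.ElementTree as ET
from collections import defaultdict

GML_DOC = """
<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember>
    <gml:Point>
      <gml:name>Central Station</gml:name>
      <gml:pos>51.05 13.73</gml:pos>
    </gml:Point>
  </gml:featureMember>
</gml:FeatureCollection>
"""

def index_gml(doc_id, gml_text, inverted_index):
    """Walk all elements, tokenize their text, and record doc_id per token."""
    root = ET.fromstring(gml_text)
    for elem in root.iter():
        if elem.text and elem.text.strip():
            for token in elem.text.lower().split():
                inverted_index[token].add(doc_id)

index = defaultdict(set)
index_gml("doc-001", GML_DOC, index)
print(index.get("central"))  # {'doc-001'}: the document is now searchable
```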
A Web Centric Architecture for Deploying Multi-Disciplinary Engineering Design Processes
NASA Technical Reports Server (NTRS)
Woyak, Scott; Kim, Hongman; Mullins, James; Sobieszczanski-Sobieski, Jaroslaw
2004-01-01
There are continuous needs for engineering organizations to improve their design processes. Current state-of-the-art techniques use computational simulations to predict design performance and optimize it through advanced design methods. These tools have been used mostly by individual engineers. This paper presents an architecture for achieving results at the organization level, beyond the individual level. The next set of gains in process improvement will come from improving the effective use of computers and software within a whole organization, not just by an individual. The architecture takes advantage of state-of-the-art capabilities to produce a Web-based system to carry engineering design into the future. To illustrate deployment of the architecture, a case study for implementing advanced multidisciplinary design optimization processes such as Bi-Level Integrated System Synthesis is discussed. Another example, rolling out a design process for Design for Six Sigma, is also described. Each example explains how an organization can effectively infuse engineering practice with new design methods and retain the knowledge over time.
Translation, cross-cultural adaptation and validation of the Diabetes Empowerment Scale – Short Form
Chaves, Fernanda Figueredo; Reis, Ilka Afonso; Pagano, Adriana Silvina; Torres, Heloísa de Carvalho
2017-01-01
ABSTRACT OBJECTIVE To translate, cross-culturally adapt and validate the Diabetes Empowerment Scale – Short Form for assessment of psychosocial self-efficacy in diabetes care within the Brazilian cultural context. METHODS Assessment of the instrument’s conceptual equivalence, as well as its translation and cross-cultural adaptation, was performed following international standards. The Expert Committee’s assessment of the translated version was conducted through a web questionnaire developed and applied via the web tool e-Surv. The cross-culturally adapted version was used for the pre-test, which was carried out via phone call in a group of eleven health care service users diagnosed with type 2 diabetes mellitus. The pre-test results were examined by a group of experts, composed of health care consultants, applied linguists and statisticians, aiming at an adequate version of the instrument, which was subsequently used for test and retest in a sample of 100 users diagnosed with type 2 diabetes mellitus via phone call, their answers being recorded by the web tool e-Surv. Internal consistency and reproducibility analyses were carried out in the statistical programming environment R. RESULTS Face and content validity were attained and the Brazilian Portuguese version, entitled Escala de Autoeficácia em Diabetes – Versão Curta, was established. The scale had acceptable internal consistency with Cronbach’s alpha of 0.634 (95%CI 0.494–0.737), while the correlation of the total score in the two periods was considered moderate (0.47). The intraclass correlation coefficient was 0.50. CONCLUSIONS The translated and cross-culturally adapted version of the instrument into spoken Brazilian Portuguese was considered valid and reliable to be used for assessment within the Brazilian population diagnosed with type 2 diabetes mellitus. The use of a web tool (e-Surv) for recording the Expert Committee responses as well as the responses in the validation tests proved to be a reliable, safe and innovative method. PMID:28355337
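As a worked sketch of the internal-consistency statistic reported above, the snippet below computes Cronbach's alpha with the standard formula on made-up item responses; the data and item count are illustrative, not the study's.

```python
# Cronbach's alpha from item responses: alpha = k/(k-1) * (1 - sum(var_i)/var_total).
# The response matrix below is hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = np.array([   # 5 hypothetical respondents, 8 Likert items (1-5)
    [4, 4, 3, 4, 5, 3, 4, 4],
    [2, 3, 2, 2, 3, 2, 3, 2],
    [5, 4, 4, 5, 5, 4, 4, 5],
    [3, 3, 3, 2, 3, 3, 2, 3],
    [4, 5, 4, 4, 4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 3))
```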
A web search on environmental topics: what is the role of ranking?
Covolo, Loredana; Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto
2013-12-01
Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word "health": "nuclear energy," "electromagnetic waves," "air pollution," "waste," and "radon." For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that institutional Web sites are more likely to appear in the first 10 positions when searching for radon and air pollution topics than for nuclear power (odds ratio=3.4, 95% confidence interval 2.1-5.4 and odds ratio=2.9, 95% confidence interval 1.8-4.7, respectively), and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9-5.1). The increasing use of online information could play an important role in forming opinions. Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible.
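The sketch below shows how an odds ratio of the kind quoted above can be derived from a 2x2 table (institutional site in the top 10 or not, by topic), with a normal-approximation confidence interval. The counts are hypothetical; the study's actual estimates came from multivariate logistic regression.

```python
# Odds ratio and 95% CI from a 2x2 table. Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = topic 1 yes/no; c,d = topic 2 yes/no (institutional site in top 10)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: "radon" queries vs "nuclear power" queries.
print(odds_ratio_ci(a=60, b=40, c=30, d=70))  # OR = 3.5 with its 95% CI
```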
NASA Astrophysics Data System (ADS)
Haghnevis, Moeed
The main objective of this research is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach including behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behaviors in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and offering new optimized actions and modeling paradigms in agent-based simulation.
2012-01-01
Background Semantic Web technology can considerably catalyze translational genetics and genomics research in medicine, where the interchange of information between basic research and clinical levels becomes crucial. This exchange involves mapping abstract phenotype descriptions from research resources, such as knowledge databases and catalogs, to unstructured datasets produced through experimental methods and clinical practice. This is especially true for the construction of mutation databases. This paper presents a way of harmonizing abstract phenotype descriptions with patient data from clinical practice, and querying this dataset about relationships between phenotypes and genetic variants, at different levels of abstraction. Methods Due to the current availability of ontological and terminological resources that have already reached some consensus in biomedicine, a reuse-based ontology engineering approach was followed. The proposed approach uses the Ontology Web Language (OWL) to represent the phenotype ontology and the patient model, the Semantic Web Rule Language (SWRL) to bridge the gap between phenotype descriptions and clinical data, and the Semantic Query-Enhanced Web Rule Language (SQWRL) to query relevant phenotype-genotype bidirectional relationships. The work tests the use of semantic web technology in the biomedical research domain of cerebrotendinous xanthomatosis (CTX), using a real dataset and ontologies. Results A framework to query relevant phenotype-genotype bidirectional relationships is provided. Phenotype descriptions and patient data were harmonized by defining 28 Horn-like rules in terms of the OWL concepts. In total, 24 patterns of SQWRL queries were designed following the initial list of competency questions. As the approach is based on OWL, the semantics of the framework follow the standard open-world-assumption logical model. Conclusions This work demonstrates how semantic web technologies can be used to support the flexible representation and computational inference mechanisms required to query patient datasets at different levels of abstraction. The open world assumption is especially good for describing only partially known phenotype-genotype relationships, in a way that is easily extensible. In the future, this type of approach could offer researchers a valuable resource to infer new data from patient data for statistical analysis in translational research. In conclusion, phenotype description formalization and mapping to clinical data are two key elements for interchanging knowledge between basic and clinical research. PMID:22849591
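A toy stand-in for the Horn-like (SWRL-style) rules described above: simple forward chaining over (subject, predicate, object) facts. The predicates, rule, and patient facts are hypothetical; the paper itself uses OWL/SWRL reasoning, not this code.

```python
# Toy forward chaining that mimics one Horn-like rule:
# hasFinding(?p, ?f) AND manifestationOf(?f, ?ph) -> hasPhenotype(?p, ?ph)
# All facts and predicate names are hypothetical.
facts = {
    ("patient1", "hasFinding", "tendon_xanthoma"),
    ("patient1", "hasFinding", "cataract"),
    ("tendon_xanthoma", "manifestationOf", "CTX_phenotype"),
}

def apply_rule(facts):
    inferred = set()
    for (p, pred1, f) in facts:
        if pred1 != "hasFinding":
            continue
        for (f2, pred2, ph) in facts:
            if pred2 == "manifestationOf" and f2 == f:
                inferred.add((p, "hasPhenotype", ph))
    return inferred

print(apply_rule(facts))  # {('patient1', 'hasPhenotype', 'CTX_phenotype')}
```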
Xu, Huayong; Yu, Hui; Tu, Kang; Shi, Qianqian; Wei, Chaochun; Li, Yuan-Yuan; Li, Yi-Xue
2013-01-01
We are witnessing rapid progress in the development of methodologies for building combinatorial gene regulatory networks involving both TFs (Transcription Factors) and miRNAs (microRNAs). There are a few tools available for these jobs, but most of them are not easy to use and are not accessible online. A web server is especially needed in order to allow users to upload experimental expression datasets and build combinatorial regulatory networks corresponding to their particular contexts. In this work, we compiled putative TF-gene, miRNA-gene and TF-miRNA regulatory relationships from forward-engineering pipelines and curated them as built-in data libraries. We streamlined the R code of our two separate forward- and reverse-engineering algorithms for combinatorial gene regulatory network construction and formalized them as two major functional modules. As a result, we released the cGRNB (combinatorial Gene Regulatory Networks Builder): a web server for constructing combinatorial gene regulatory networks through integrated engineering of seed-matching sequence information and gene expression datasets. The cGRNB enables two major network-building modules, one for MPGE (miRNA-perturbed gene expression) datasets and the other for parallel miRNA/mRNA expression datasets. A miRNA-centered two-layer combinatorial regulatory cascade is the output of the first module, and a comprehensive genome-wide network involving all three types of combinatorial regulation (TF-gene, TF-miRNA, and miRNA-gene) is the output of the second module. In this article we propose cGRNB, a web server for building combinatorial gene regulatory networks through integrated engineering of seed-matching sequence information and gene expression datasets. Since parallel miRNA/mRNA expression datasets are rapidly accumulating with the advance of next-generation sequencing techniques, cGRNB will be a very useful tool for researchers to build combinatorial gene regulatory networks based on expression datasets. The cGRNB web-server is free and available online at http://www.scbit.org/cgrnb.
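The following is an illustrative sketch of the kind of combinatorial network such a tool outputs: merge TF-gene, miRNA-gene and TF-miRNA edges into one directed graph and pull out a miRNA-centered two-layer cascade. All node names are hypothetical, and this is not cGRNB's R code.

```python
# Merge three edge types into one directed regulatory graph and extract a
# miRNA-centered cascade (upstream TFs, downstream target genes).
# Node names are hypothetical.
import networkx as nx

tf_gene    = [("TF_A", "gene1"), ("TF_A", "gene2")]
mirna_gene = [("miR-21", "gene1"), ("miR-21", "gene3")]
tf_mirna   = [("TF_A", "miR-21")]

G = nx.DiGraph()
G.add_edges_from(tf_gene, kind="TF-gene")
G.add_edges_from(mirna_gene, kind="miRNA-gene")
G.add_edges_from(tf_mirna, kind="TF-miRNA")

def mirna_cascade(graph, mirna):
    """Upstream TFs and downstream target genes of one miRNA."""
    upstream_tfs = list(graph.predecessors(mirna))
    target_genes = list(graph.successors(mirna))
    return upstream_tfs, target_genes

print(mirna_cascade(G, "miR-21"))  # (['TF_A'], ['gene1', 'gene3'])
```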
40 CFR 1045.130 - What installation instructions must I give to vessel manufacturers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... engine's emission control information label hard to read during normal engine maintenance, you must place... equivalent format. For example, you may post instructions on a publicly available Web site for downloading or...
40 CFR 1054.130 - What installation instructions must I give to equipment manufacturers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... makes the engine's emission control information label hard to read during normal engine maintenance, you... in an equivalent format. For example, you may post instructions on a publicly available Web site for...
Our Commitment to Reliable Health and Medical Information
... 000 visitors world-wide per day. HONcode Toolbar: search engine and checker of the certification status Automatically checks ... HONcode status when browsing health web sites. The search engine indexes only HONcode-certified sites. HONcodeHunt currently includes ...
MedlinePlus Connect: Web Application
... will result in a query to the MedlinePlus search engine. If you specify a code and the name/ ... system or problem code, will use the MedlinePlus search engine (English only): https://connect.medlineplus.gov/application?mainSearchCriteria. ...
The Einstein Suite: A Web-Based Tool for Rapid and Collaborative Engineering Design and Analysis
NASA Technical Reports Server (NTRS)
Palmer, Richard S.
1997-01-01
Taken together, the components of the Einstein Suite provide two revolutionary capabilities: they have the potential to change the way engineering and financial engineering are performed by (1) providing currently unavailable functionality, and (2) providing a 10-100 times improvement over currently available but impractical or costly functionality.
49 CFR Appendix C to Part 222 - Guide to Establishing Quiet Zones
Code of Federal Regulations, 2011 CFR
2011-10-01
... Horns will not be subject to annual reviews. (5) The use of FRA's web-based Quiet Zone Calculator is... appendix A (e.g., shorter than required traffic channelization devices), non-engineering ASMs (e.g., programmed law enforcement), and engineering ASMs (i.e., engineering improvements other than modified SSMs...
The Kamusi Project Edit Engine: A Tool for Collaborative Lexicography.
ERIC Educational Resources Information Center
Benjamin, Martin; Biersteker, Ann
2001-01-01
Discusses the design and implementation of the Kamusi Project Edit Engine, a Web-based software system uniquely suited to the needs of Swahili collaborative lexicography. Describes the edit engine, including organization of the lexicon and the mechanics by which participants use the system, discusses philosophical issues confronted in the design,…
Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research
ERIC Educational Resources Information Center
Chanson, Hubert
2007-01-01
For the last three decades, the engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution" that has included the introduction of the personal computer, the development of email and the World Wide Web, and broadband Internet connections at home. Herein the writer compares…
Publications - AR 2011-C | Alaska Division of Geological & Geophysical
DGGS AR 2011-C Publication Details. Title: Engineering Geology FY12 project descriptions. Citation: Combellick, R.A., 2012, Engineering Geology FY12 project descriptions, in DGGS Staff, Alaska Division of
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel, etc.) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
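A minimal sketch (not the original CGI code) of the AND/OR term search that dbEngine provides: match records whose text contains all terms (AND) or at least one term (OR). The record contents are made up for illustration.

```python
# Simple AND/OR keyword search over in-memory records. Record text is hypothetical.
records = {
    "rec1": "ATP synthase subunit mitochondrion human",
    "rec2": "cytochrome c oxidase mitochondrion yeast",
    "rec3": "ribosomal protein cytoplasm human",
}

def search(records, terms, mode="AND"):
    terms = [t.lower() for t in terms]
    hits = []
    for rec_id, text in records.items():
        words = text.lower().split()
        matches = [t in words for t in terms]
        if (mode == "AND" and all(matches)) or (mode == "OR" and any(matches)):
            hits.append(rec_id)
    return hits

print(search(records, ["mitochondrion", "human"], mode="AND"))  # ['rec1']
print(search(records, ["yeast", "cytoplasm"], mode="OR"))       # ['rec2', 'rec3']
```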
HIDEC F-15 adaptive engine control system flight test results
NASA Technical Reports Server (NTRS)
Smolka, James W.
1987-01-01
NASA-Ames' Highly Integrated Digital Electronic Control (HIDEC) flight test program aims to develop fully integrated airframe, propulsion, and flight control systems. The HIDEC F-15 adaptive engine control system flight test program has demonstrated that significant performance improvements are obtainable through the retention of stall-free engine operation throughout the aircraft flight and maneuver envelopes. The greatest thrust increase was projected for the medium-to-high altitude flight regime at subsonic speed, which is of such importance to air combat. Adaptive engine control systems such as the HIDEC F-15's can be used to upgrade the performance of existing aircraft without resort to expensive reengining programs.
The Montage Image Mosaic Toolkit As A Visualization Engine.
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Lerias, Angela; Good, John; Mandel, Eric; Pepper, Joshua
2018-01-01
The Montage toolkit has been used since 2003 to aggregate FITS images into mosaics for science analysis. It is now finding application as an engine for image visualization. One important reason is that the functionality developed for creating mosaics is also valuable in image visualization. An equally important (though perhaps less obvious) reason is that Montage is portable and is built on standard astrophysics toolkits, making it very easy to integrate into new environments. Montage models and rectifies the sky background to a common level and thus reveals faint, diffuse features; it offers an adaptive image-stretching method that preserves the dynamic range of a FITS image when represented in PNG format; it provides utilities for creating cutouts of large images and downsampled versions of large images that can then be visualized on desktops or in browsers; it contains a fast reprojection algorithm intended for visualization; and it resamples and reprojects images to a common grid for subsequent multi-color visualization. This poster will highlight these visualization capabilities with the following examples: 1. Creation of down-sampled multi-color images of a 16-wavelength Infrared Atlas of the Galactic Plane, sampled at 1 arcsec when created. 2. Integration into a web-based image processing environment: JS9 is an interactive image display service for web browsers, desktops and mobile devices. It exploits the flux-preserving reprojection algorithms in Montage to transform diverse images to common image parameters for display. Select Montage programs have been compiled to Javascript/WebAssembly using the Emscripten compiler, which allows our reprojection algorithms to run in browsers at close to native speed. 3. Creation of complex sky coverage maps: a multicolor all-sky map that shows the sky coverage of the Kepler and K2, KELT and TESS projects, overlaid on an all-sky 2MASS image. Montage is funded by the National Science Foundation under Grant Number ACI-1642453. JS9 is funded by the Chandra X-ray Center (NAS8-03060) and NASA's Universe of Learning (STScI-509913).
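As a sketch of the idea behind adaptive stretching, the snippet below maps a FITS image's pixel distribution onto the displayable range using percentiles so that faint structure survives in the PNG. This is a generic percentile stretch, not Montage's own algorithm, and the input file name is hypothetical.

```python
# Percentile-based stretch of a FITS image for PNG display.
# "mosaic.fits" is a hypothetical input file.
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

data = fits.getdata("mosaic.fits").astype(float)        # load pixel array
lo, hi = np.nanpercentile(data, [0.5, 99.5])             # clip extreme tails
stretched = np.clip((data - lo) / (hi - lo), 0.0, 1.0)   # scale to 0..1
plt.imsave("mosaic.png", stretched, cmap="gray", origin="lower")
```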
NASA Astrophysics Data System (ADS)
Knox, S.; Meier, P.; Mohammed, K.; Korteling, B.; Matrosov, E. S.; Hurford, A.; Huskova, I.; Harou, J. J.; Rosenberg, D. E.; Thilmant, A.; Medellin-Azuara, J.; Wicks, J.
2015-12-01
Capacity expansion on resource networks is essential to adapting to economic and population growth and pressures such as climate change. Engineered infrastructure systems such as water, energy, or transport networks require sophisticated and bespoke models to refine management and investment strategies. Successful modeling of such complex systems relies on good data management and advanced methods to visualize and share data. Engineered infrastructure systems are often represented as networks of nodes and links with operating rules describing their interactions. Infrastructure system management and planning can be abstracted to simulating or optimizing new operations and extensions of the network. By separating the data storage of abstract networks from manipulation and modeling, we have created a system where infrastructure modeling across various domains is facilitated. We introduce Hydra Platform, a Free Open Source Software package designed for analysts and modelers to store, manage and share network topology and data. Hydra Platform is a Python library with a web service layer for remote applications, called Apps, to connect. Apps serve various functions including network or results visualization, data export (e.g. into a proprietary format) or model execution. This client-server architecture allows users to manipulate and share centrally stored data. XML templates allow a standardised description of the data structure required for storing network data such that it is compatible with specific models. Hydra Platform represents networks in an abstract way and is therefore not bound to a single modeling domain. It is the Apps that create domain-specific functionality. Using Apps, researchers from different domains can incorporate different models within the same network, enabling cross-disciplinary modeling while minimizing errors and streamlining data sharing. Separating the Python library from the web layer allows developers to natively expand the software or build web-based apps in other languages for remote functionality. Partner CH2M is developing a commercial user interface for Hydra Platform; however, custom interfaces and visualization tools can be built. Hydra Platform is available on GitHub, while Apps will be shared on a central repository.
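An illustrative sketch of the abstract node/link network representation such a platform stores, serialized to JSON as a web-service payload. The structure and field names here are assumptions for illustration, not Hydra Platform's actual schema or API.

```python
# Abstract network as nodes and links, serialized for a web-service layer.
# Field names and attribute values are hypothetical.
import json

network = {
    "name": "demo water network",
    "nodes": [
        {"id": 1, "name": "reservoir", "attributes": {"capacity_Mm3": 120}},
        {"id": 2, "name": "city_demand", "attributes": {"demand_Mm3": 45}},
    ],
    "links": [
        {"id": 10, "from": 1, "to": 2, "attributes": {"max_flow_Mm3": 60}},
    ],
}

payload = json.dumps(network, indent=2)   # what a client App might send or receive
restored = json.loads(payload)            # what a model App would consume
assert restored["links"][0]["from"] == 1
print(payload)
```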
Scholarly Research on Educational Adaptation of Social Media: Is There Evidence of Publication Bias?
ERIC Educational Resources Information Center
Piotrowski, Chris
2015-01-01
The sizeable majority of research findings on educational adaptation of social media (SM) is based on college student samples. A cursory review of the extant literature on the educational use of SM appears to convey an uncritical spirit regarding adaptations of modern Web 2.0 technology. This article examines the issue of whether "publication…